mirror of https://github.com/IBM/fp-go.git synced 2026-01-13 00:44:11 +02:00

Compare commits

...

87 Commits

Author SHA1 Message Date
Dr. Carsten Leue
f154790d88 fix: optimize iter a bit
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2026-01-12 18:54:33 +01:00
Carsten Leue
e010f13dce fix: initial version of circuit breaker (#151)
* fix: add circuitbreaker and doc

Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>

* fix: refactor and more low level tests

Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>

* fix: document thread safety

Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>

* fix: add stateio

Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>

* fix: documentation of StateIO monad

Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>

* fix: initial version of circuitbreaker

Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>

---------

Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2026-01-12 18:19:39 +01:00
Carsten Leue
86a260a204 Introduce IORef (#150)
* fix: add ioref and tests

Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>

* fix: better tests

Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>

---------

Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2026-01-04 16:45:40 +01:00
lif
6a6b982779 feat: Add OrElse to ioeither for error recovery (#148)
* feat: add OrElse to ioeither for error recovery

Add OrElse function to both v1 and v2 ioeither packages for error recovery.
This allows recovering from a Left value by applying a function to the error
and returning a new IOEither, consistent with the Either package's API.

- Add OrElse to ioeither/generic/ioeither.go
- Add OrElse wrapper to ioeither/ioeither.go
- Add OrElse to v2/ioeither/generic/ioeither.go
- Add OrElse to v2/ioeither/ioeither.go
- Add comprehensive tests for both v1 and v2

Closes #146

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Signed-off-by: majiayu000 <1835304752@qq.com>

* chore(v2): drop ioeither OrElse addition

---------

Signed-off-by: majiayu000 <1835304752@qq.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-04 16:41:13 +01:00
Carsten Leue
9d31752887 Rewrite the Retry logic based on Trampoline (#149)
* fix: implement retry via tail rec

Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>

* fix: base retry on Trampoline

Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>

* fix: refactor retry

Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>

---------

Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2026-01-02 15:43:51 +01:00
Carsten Leue
14b52568b5 Add OrElse consistently and improve docs (#147)
* fix: OrElse

Signed-off-by: Carsten Leue <carsten.leue@de.ibm.com>

* fix: improve tests

Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>

* fix: FilterOrElse

Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>

* fix: tests and doc

Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>

* fix: add sample

Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>

* fix: add tests for CopyFile

Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>

* fix: signature of Close

Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>

---------

Signed-off-by: Carsten Leue <carsten.leue@de.ibm.com>
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-12-31 15:59:10 +01:00
Dr. Carsten Leue
49227551b6 fix: more iter methods
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-12-22 15:03:47 +01:00
Dr. Carsten Leue
69691e9e70 fix: iterators
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-12-20 16:38:36 +01:00
Dr. Carsten Leue
d3c466bfb7 fix: some cleanup
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-12-19 13:18:49 +01:00
Dr. Carsten Leue
a6c6ea804f fix: overhaul record
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-12-18 18:32:45 +01:00
Dr. Carsten Leue
31ff98901e fix: latest doc fixes
BREAKING CHANGE: new v2

Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-12-18 16:59:23 +01:00
Dr. Carsten Leue
255cf4353c fix: better formatting
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-12-18 16:07:26 +01:00
Dr. Carsten Leue
4dfc1b5a44 fix: better doc and implementation of retry
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-12-17 16:28:28 +01:00
Dr. Carsten Leue
20398e67a9 fix: better doc and implementation of retry
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-12-17 15:58:11 +01:00
Dr. Carsten Leue
fceda15701 doc: improve docs
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-12-17 10:11:58 +01:00
Dr. Carsten Leue
4ebfcadabe fix: add better tests
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-12-16 14:03:01 +01:00
Dr. Carsten Leue
acb601fc01 fix: reuse some more code
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-12-15 16:30:40 +01:00
Dr. Carsten Leue
d17663f016 fix: better doc
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-12-15 11:16:09 +01:00
Dr. Carsten Leue
829365fc24 doc: improve docs
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-12-12 13:30:10 +01:00
Dr. Carsten Leue
64b5660b4e doc: remove some comments
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-12-12 12:35:53 +01:00
Dr. Carsten Leue
16e82d6a65 fix: better cancellation support
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-12-12 11:52:43 +01:00
Dr. Carsten Leue
0d40fdcebb fix: implement tail recursion
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-12-12 11:18:32 +01:00
Dr. Carsten Leue
6a4dfa2c93 fix: better doc
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-12-11 16:18:55 +01:00
Dr. Carsten Leue
a37f379a3c fix: semantic of MapTo and ChainTo and update tests
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-12-11 09:09:44 +01:00
Dr. Carsten Leue
ece0cd135d fix: add more tests and logging
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-12-10 18:23:19 +01:00
Dr. Carsten Leue
739b6a284c fix: better slog based logging
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-12-09 17:52:57 +01:00
Dr. Carsten Leue
ba10d8d314 doc: fix docs
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-12-09 13:00:03 +01:00
Dr. Carsten Leue
3d6c419185 fix: add better logging
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-12-09 12:49:44 +01:00
Dr. Carsten Leue
3f4b6292e4 fix: optimize Traverse
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-12-05 21:35:05 +01:00
Dr. Carsten Leue
b1704b6d26 fix: implement TraverseReader
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-12-05 17:51:13 +01:00
Dr. Carsten Leue
ffdfd218f8 fix: implement Flip for Reader
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-12-05 11:04:49 +01:00
Dr. Carsten Leue
34826d8c52 fix: Ask and add tests to retry
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-12-04 16:47:53 +01:00
Dr. Carsten Leue
24c0519cc7 fix: try to unify type signatures
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-12-04 16:31:21 +01:00
Dr. Carsten Leue
ff48d8953e fix: implement some missing methods in reader io
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-12-04 13:50:25 +01:00
Dr. Carsten Leue
d739c9b277 fix: add doc to readerio
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-12-03 18:13:59 +01:00
Dr. Carsten Leue
f0054431a5 fix: add logging to readerio
2025-12-03 18:07:06 +01:00
Carsten Leue
1a89ec3df7 fix: implement Sequence for Pair
Signed-off-by: Carsten Leue <carsten.leue@de.ibm.com>
2025-11-28 11:22:23 +01:00
Carsten Leue
f652a94c3a fix: add template based logger
Signed-off-by: Carsten Leue <carsten.leue@de.ibm.com>
2025-11-28 10:11:08 +01:00
Dr. Carsten Leue
774db88ca5 fix: add name to prism
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-11-27 13:26:36 +01:00
Dr. Carsten Leue
62a3365b20 fix: add conversion prisms for numbers
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-11-27 13:12:18 +01:00
Dr. Carsten Leue
d9a16a6771 fix: add reduce operations to readerioresult
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-11-26 17:00:10 +01:00
Dr. Carsten Leue
8949cc7dca fix: expose stats
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-11-26 13:44:40 +01:00
Dr. Carsten Leue
fa6b6caf22 fix: generic order for reader.Flap
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-11-26 12:53:13 +01:00
Dr. Carsten Leue
a1e8d397c3 fix: better doc and some helpers
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-11-26 12:06:09 +01:00
Dr. Carsten Leue
dbe7102e43 fix: better doc and some helpers
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-11-26 12:05:31 +01:00
Dr. Carsten Leue
09aeb996e2 fix: add GetOrElseOf
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-11-24 18:57:30 +01:00
Dr. Carsten Leue
7cd575d95a fix: improve Prism and Optional
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-11-24 18:22:52 +01:00
Dr. Carsten Leue
dcfb023891 fix: improve assertions
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-11-24 17:28:48 +01:00
Dr. Carsten Leue
51cf241a26 fix: add ReaderK
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-11-24 12:29:55 +01:00
Dr. Carsten Leue
9004c93976 fix: add some idiomatic helpers
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-11-24 10:40:58 +01:00
Dr. Carsten Leue
d8ab6b0ce5 fix: ChainReaderK
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-11-22 10:39:56 +01:00
Dr. Carsten Leue
4e9998b645 fix: benchmarks and better docs
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-11-21 15:39:41 +01:00
Dr. Carsten Leue
2ea9e292e1 fix: idiomatic/readeresult
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-11-21 15:25:59 +01:00
Dr. Carsten Leue
12a20e30d1 fix: implement BindReaderK
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-11-21 13:01:27 +01:00
Dr. Carsten Leue
4909ad5473 fix: add missing monoid
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-11-21 10:22:50 +01:00
Dr. Carsten Leue
d116317cde fix: add readerresult
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-11-21 10:04:28 +01:00
Dr. Carsten Leue
1428241f2c fix: race condition
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-11-21 08:36:07 +01:00
Dr. Carsten Leue
ef9216bad7 fix: documentation, tests, some utilities
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-11-20 08:43:15 +01:00
Dr. Carsten Leue
fe77c770b6 fix: cleanup types
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-11-19 17:36:49 +01:00
Dr. Carsten Leue
1c42b2ac1d fix: implement idiomatic/ioresult
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-11-19 15:39:02 +01:00
Dr. Carsten Leue
cbd93fdecc fix: add statereaderioresult
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-11-18 17:54:04 +01:00
Dr. Carsten Leue
6d94697128 fix: document statereaderioeither
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-11-18 16:06:56 +01:00
Dr. Carsten Leue
77dde302ef Merge branch 'main' of github.com:IBM/fp-go
2025-11-18 10:59:57 +01:00
Dr. Carsten Leue
909d626019 fix: several performance improvements
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-11-18 10:58:24 +01:00
renovate[bot]
b01a8f2aff chore(deps): update actions/checkout action to v4.3.1 (#145)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-11-18 06:31:59 +00:00
Dr. Carsten Leue
8a2e9539b1 fix: add result
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-11-17 20:36:06 +01:00
Dr. Carsten Leue
03d9720a29 fix: optimize performance for option
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-11-17 12:19:24 +01:00
Dr. Carsten Leue
57794ccb34 fix: add idiomatic go options package
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-11-17 11:10:27 +01:00
Dr. Carsten Leue
404eb875d3 fix: add idiomatic version
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-11-16 17:27:16 +01:00
Dr. Carsten Leue
ed108812d6 fix: modernize codebase
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-11-15 17:00:22 +01:00
Dr. Carsten Leue
ab868315d4 fix: traverse
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-11-15 12:13:37 +01:00
Dr. Carsten Leue
02d0be9dad fix: add traversal for sequences
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-11-14 14:12:44 +01:00
Dr. Carsten Leue
2c1d8196b4 fix: support go iterators and cleanup types
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-11-14 12:56:12 +01:00
Dr. Carsten Leue
17eb8ae66f fix: add Chain...Left methods
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-11-13 16:51:15 +01:00
Dr. Carsten Leue
b70e481e7d fix: some minor improvements
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-11-13 12:56:51 +01:00
Dr. Carsten Leue
3c3bb7c166 fix: improve lens implementation
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-11-13 12:15:52 +01:00
Dr. Carsten Leue
d3007cbbfa fix: improve lens generator
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-11-13 09:39:18 +01:00
Dr. Carsten Leue
5aa0e1ea2e fix: handle non comparable types
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-11-13 09:35:56 +01:00
Dr. Carsten Leue
d586428cb0 fix: examples
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-11-13 09:05:57 +01:00
Dr. Carsten Leue
d2dbce6e8b fix: improve lens handling
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-11-12 18:23:57 +01:00
Dr. Carsten Leue
6f7ec0768d fix: improve lens generation
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-11-12 17:28:20 +01:00
Dr. Carsten Leue
ca813b673c fix: better tests and doc
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-11-12 16:24:12 +01:00
Dr. Carsten Leue
af271e7d10 fix: better endo and lens
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-11-12 15:03:55 +01:00
Dr. Carsten Leue
567315a31c fix: make a distinction between Chain and Compose for endomorphism
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-11-12 13:51:00 +01:00
Dr. Carsten Leue
311ed55f06 fix: add Read method to Readers
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-11-12 11:59:20 +01:00
Dr. Carsten Leue
23333ce52c doc: improve doc
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-11-12 11:08:18 +01:00
Dr. Carsten Leue
eb7fc9f77b fix: better tests for Lazy
Signed-off-by: Dr. Carsten Leue <carsten.leue@de.ibm.com>
2025-11-12 10:46:07 +01:00
877 changed files with 145620 additions and 7533 deletions

View File

@@ -28,7 +28,7 @@ jobs:
fail-fast: false # Continue with other versions if one fails
steps:
# full checkout for semantic-release
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4.3.1
with:
fetch-depth: 0
- name: Set up Go ${{ matrix.go-version }}
@@ -66,7 +66,7 @@ jobs:
matrix:
go-version: ['1.24.x', '1.25.x']
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4.3.1
with:
fetch-depth: 0
- name: Set up Go ${{ matrix.go-version }}
@@ -126,7 +126,7 @@ jobs:
steps:
# full checkout for semantic-release
- name: Full checkout
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4.3.1
with:
fetch-depth: 0

View File

@@ -369,6 +369,11 @@ func ToIOOption[GA ~func() O.Option[A], GEA ~func() ET.Either[E, A], E, A any](i
)
}
// OrElse returns the original IOEither if it is a Right, otherwise it applies the given function to the error and returns the result.
func OrElse[GA ~func() ET.Either[E, A], E, A any](onLeft func(E) GA) func(GA) GA {
return eithert.OrElse(IO.MonadChain[GA, GA, ET.Either[E, A], ET.Either[E, A]], IO.Of[GA, ET.Either[E, A]], onLeft)
}
func FromIOOption[GEA ~func() ET.Either[E, A], GA ~func() O.Option[A], E, A any](onNone func() E) func(ioo GA) GEA {
return IO.Map[GA, GEA](ET.FromOption[A](onNone))
}

View File

@@ -266,6 +266,11 @@ func Alt[E, A any](second L.Lazy[IOEither[E, A]]) func(IOEither[E, A]) IOEither[
return G.Alt(second)
}
// OrElse returns the original IOEither if it is a Right, otherwise it applies the given function to the error and returns the result.
func OrElse[E, A any](onLeft func(E) IOEither[E, A]) func(IOEither[E, A]) IOEither[E, A] {
return G.OrElse[IOEither[E, A]](onLeft)
}
func MonadFlap[E, B, A any](fab IOEither[E, func(A) B], a A) IOEither[E, B] {
return G.MonadFlap[IOEither[E, func(A) B], IOEither[E, B]](fab, a)
}
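As a quick illustration of the curried `OrElse` added in this file, a minimal sketch (the `errNotFound` sentinel and the fallback value are illustrative; `Right`, `Left`, and `F.Pipe1` are the same helpers used in the test file below):

```go
package main

import (
	"errors"
	"fmt"

	F "github.com/IBM/fp-go/v2/function"
	"github.com/IBM/fp-go/v2/ioeither"
)

var errNotFound = errors.New("not found")

func main() {
	// Recover from "not found" errors, propagate everything else.
	recoverNotFound := ioeither.OrElse(func(err error) ioeither.IOEither[error, string] {
		if errors.Is(err, errNotFound) {
			return ioeither.Right[error]("default value")
		}
		return ioeither.Left[string](err)
	})

	result := F.Pipe1(
		ioeither.Left[string](errNotFound),
		recoverNotFound,
	)
	fmt.Println(result()) // a Right containing "default value"
}
```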

View File

@@ -134,3 +134,44 @@ func TestApSecond(t *testing.T) {
assert.Equal(t, E.Of[error]("b"), x())
}
func TestOrElse(t *testing.T) {
// Test that OrElse recovers from a Left
recover := OrElse(func(err string) IOEither[string, int] {
return Right[string](42)
})
// When input is Left, should recover
leftResult := F.Pipe1(
Left[int]("error"),
recover,
)
assert.Equal(t, E.Right[string](42), leftResult())
// When input is Right, should pass through unchanged
rightResult := F.Pipe1(
Right[string](100),
recover,
)
assert.Equal(t, E.Right[string](100), rightResult())
// Test that OrElse can also return a Left (propagate different error)
recoverOrFail := OrElse(func(err string) IOEither[string, int] {
if err == "recoverable" {
return Right[string](0)
}
return Left[int]("unrecoverable: " + err)
})
recoverable := F.Pipe1(
Left[int]("recoverable"),
recoverOrFail,
)
assert.Equal(t, E.Right[string](0), recoverable())
unrecoverable := F.Pipe1(
Left[int]("fatal"),
recoverOrFail,
)
assert.Equal(t, E.Left[int]("unrecoverable: fatal"), unrecoverable())
}

View File

@@ -0,0 +1,17 @@
{
"permissions": {
"allow": [
"Bash(ls -la \"c:\\d\\fp-go\\v2\\internal\\monad\"\" && ls -la \"c:dfp-gov2internalapplicative\"\")",
"Bash(ls -la \"c:\\d\\fp-go\\v2\\internal\\chain\"\" && ls -la \"c:dfp-gov2internalfunctor\"\")",
"Bash(go build:*)",
"Bash(go test:*)",
"Bash(go doc:*)",
"Bash(go tool cover:*)",
"Bash(sort:*)",
"Bash(tee:*)",
"Bash(find:*)"
],
"deny": [],
"ask": []
}
}

v2/BENCHMARK_COMPARISON.md Normal file (+482 lines)
View File

@@ -0,0 +1,482 @@
# Benchmark Comparison: Idiomatic vs Standard Either/Result
**Date:** 2025-11-18
**System:** AMD Ryzen 7 PRO 7840U w/ Radeon 780M Graphics (16 cores)
**Go Version:** go1.23+
This document provides a detailed performance comparison between the `either` package and the `idiomatic/result` package after recent optimizations to the `either` package.
## Executive Summary
After optimizations to the `either` package, the performance characteristics have changed significantly:
### Key Findings
1. **Constructors & Predicates**: Both packages now perform comparably (~1-2 ns/op) with **zero heap allocations**
2. **Zero-allocation insight**: The `Either` struct (24 bytes) does NOT escape to heap - Go returns it by value on the stack
3. **Core Operations**: Idiomatic package has a **consistent advantage** of 1.2x - 2.3x for most operations
4. **Complex Operations**: Idiomatic package shows **massive advantages**:
- ChainFirst (Right): **32.4x faster** (87.6 ns → 2.7 ns, 72 B → 0 B)
- Pipeline operations: **2-3x faster** with lower allocations
5. **All simple operations**: Both maintain **zero heap allocations** (0 B/op, 0 allocs/op)
### Winner by Category
| Category | Winner | Reason |
|----------|--------|--------|
| Constructors | **TIE** | Both ~1.3-1.8 ns/op |
| Predicates | **TIE** | Both ~1.2-1.5 ns/op |
| Simple Transformations | **Idiomatic** | 1.2-2x faster |
| Monadic Operations | **Idiomatic** | 1.2-2.3x faster |
| Complex Chains | **Idiomatic** | 32x faster, zero allocs |
| Pipelines | **Idiomatic** | 2-2.4x faster, fewer allocs |
| Extraction | **Idiomatic** | 6x faster (GetOrElse) |
## Detailed Benchmark Results
### Constructor Operations
| Operation | Either (ns/op) | Idiomatic (ns/op) | Speedup | Either Allocs | Idio Allocs |
|-----------|----------------|-------------------|---------|---------------|-------------|
| Left | 1.76 | **1.35** | **1.3x** ✓ | 0 B/op | 0 B/op |
| Right | 1.38 | 1.43 | 1.0x | 0 B/op | 0 B/op |
| Of | 1.68 | **1.22** | **1.4x** ✓ | 0 B/op | 0 B/op |
**Analysis:** Both packages perform extremely well with **zero heap allocations**. Idiomatic has a slight edge on Left and Of.
**Important Clarification: Neither Package Escapes to Heap**
A common misconception is that struct-based Either escapes to heap while tuples stay on stack. The benchmarks prove this is FALSE:
```go
// Either package - NO heap allocation
type Either[E, A any] struct {
r A // 8 bytes
l E // 8 bytes
isLeft bool // 1 byte + 7 padding
} // Total: 24 bytes
func Of[E, A any](value A) Either[E, A] {
return Right[E](value) // Returns 24-byte struct BY VALUE
}
// Benchmark result: 0 B/op, 0 allocs/op ✓
```
**Why Either doesn't escape:**
1. **Small struct** - At 24 bytes, it's below Go's escape threshold (~64 bytes)
2. **Return by value** - Go returns small structs on the stack
3. **Inlining** - The `//go:inline` directive eliminates function overhead
4. **No pointers** - No pointer escapes in normal usage
**Idiomatic package:**
```go
// Returns native tuple - always stack allocated
func Right[A any](a A) (A, error) {
return a, nil // 16 bytes total (8 + 8)
}
// Benchmark result: 0 B/op, 0 allocs/op ✓
```
**Both achieve zero allocations** - the performance difference comes from other factors like function composition overhead, not from constructor allocations.
### Predicate Operations
| Operation | Either (ns/op) | Idiomatic (ns/op) | Speedup | Either Allocs | Idio Allocs |
|-----------|----------------|-------------------|---------|---------------|-------------|
| IsLeft | 1.45 | **1.35** | **1.1x** ✓ | 0 B/op | 0 B/op |
| IsRight | 1.47 | 1.51 | 1.0x | 0 B/op | 0 B/op |
**Analysis:** Virtually identical performance. The optimizations brought them to parity.
### Fold Operations
| Operation | Either (ns/op) | Idiomatic (ns/op) | Speedup | Either Allocs | Idio Allocs |
|-----------|----------------|-------------------|---------|---------------|-------------|
| MonadFold (Right) | 2.71 | - | - | 0 B/op | - |
| MonadFold (Left) | 2.26 | - | - | 0 B/op | - |
| Fold (Right) | 4.03 | **2.75** | **1.5x** ✓ | 0 B/op | 0 B/op |
| Fold (Left) | 3.69 | **2.40** | **1.5x** ✓ | 0 B/op | 0 B/op |
**Analysis:** Idiomatic package is 1.5x faster for curried Fold operations.
### Unwrap Operations
| Operation | Either (ns/op) | Idiomatic (ns/op) | Note |
|-----------|----------------|-------------------|------|
| Unwrap (Right) | 1.27 | N/A | Either-specific |
| Unwrap (Left) | 1.24 | N/A | Either-specific |
| UnwrapError (Right) | 1.27 | N/A | Either-specific |
| UnwrapError (Left) | 1.27 | N/A | Either-specific |
| ToError (Right) | N/A | 1.40 | Idiomatic-specific |
| ToError (Left) | N/A | 1.84 | Idiomatic-specific |
**Analysis:** Both provide fast unwrapping. Idiomatic's tuple return is naturally unwrapped.
### Map Operations
| Operation | Either (ns/op) | Idiomatic (ns/op) | Speedup | Either Allocs | Idio Allocs |
|-----------|----------------|-------------------|---------|---------------|-------------|
| MonadMap (Right) | 2.96 | - | - | 0 B/op | - |
| MonadMap (Left) | 1.99 | - | - | 0 B/op | - |
| Map (Right) | 5.13 | **4.34** | **1.2x** ✓ | 0 B/op | 0 B/op |
| Map (Left) | 4.19 | **2.48** | **1.7x** ✓ | 0 B/op | 0 B/op |
| MapLeft (Right) | 3.93 | **2.22** | **1.8x** ✓ | 0 B/op | 0 B/op |
| MapLeft (Left) | 7.22 | **3.51** | **2.1x** ✓ | 0 B/op | 0 B/op |
**Analysis:** Idiomatic is consistently faster across all Map variants, especially for error path (Left).
### BiMap Operations
| Operation | Either (ns/op) | Idiomatic (ns/op) | Speedup | Either Allocs | Idio Allocs |
|-----------|----------------|-------------------|---------|---------------|-------------|
| BiMap (Right) | 16.79 | **3.82** | **4.4x** ✓ | 0 B/op | 0 B/op |
| BiMap (Left) | 11.47 | **3.47** | **3.3x** ✓ | 0 B/op | 0 B/op |
**Analysis:** Idiomatic package shows significant advantage for BiMap operations (3-4x faster).
### Chain (Monadic Bind) Operations
| Operation | Either (ns/op) | Idiomatic (ns/op) | Speedup | Either Allocs | Idio Allocs |
|-----------|----------------|-------------------|---------|---------------|-------------|
| MonadChain (Right) | 2.89 | - | - | 0 B/op | - |
| MonadChain (Left) | 2.03 | - | - | 0 B/op | - |
| Chain (Right) | 5.44 | **2.34** | **2.3x** ✓ | 0 B/op | 0 B/op |
| Chain (Left) | 4.44 | **2.53** | **1.8x** ✓ | 0 B/op | 0 B/op |
| ChainFirst (Right) | 87.62 | **2.71** | **32.4x** ✓✓✓ | 72 B, 3 allocs | 0 B, 0 allocs |
| ChainFirst (Left) | 3.94 | **2.48** | **1.6x** ✓ | 0 B/op | 0 B/op |
**Analysis:**
- Idiomatic is 2x faster for standard Chain operations
- **ChainFirst shows the most dramatic difference**: 32.4x faster with zero allocations vs 72 bytes!
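To make the ChainFirst numbers concrete, a micro-benchmark of roughly this shape reproduces the pattern on the either side (a sketch; the benchmark name, the side-effecting step and the input value are illustrative):

```go
package either_test

import (
	"testing"

	E "github.com/IBM/fp-go/v2/either"
)

// Sketch of the ChainFirst (Right) scenario: the side-effecting step returns
// its own Either, but the original Right value is what flows onward.
func BenchmarkChainFirstRight(b *testing.B) {
	sideEffect := func(_ int) E.Either[error, string] { return E.Of[error]("seen") }
	step := E.ChainFirst(sideEffect)
	input := E.Of[error](42)

	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		_ = step(input)
	}
}
```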
### Flatten Operations
| Operation | Either (ns/op) | Idiomatic (ns/op) | Note |
|-----------|----------------|-------------------|------|
| Flatten (Right) | 8.73 | N/A | Either-specific nested structure |
| Flatten (Left) | 8.86 | N/A | Either-specific nested structure |
**Analysis:** Flatten is specific to Either's nested structure handling.
### Applicative Operations
| Operation | Either (ns/op) | Idiomatic (ns/op) | Speedup | Either Allocs | Idio Allocs |
|-----------|----------------|-------------------|---------|---------------|-------------|
| MonadAp (RR) | 3.81 | - | - | 0 B/op | - |
| MonadAp (RL) | 3.07 | - | - | 0 B/op | - |
| MonadAp (LR) | 3.08 | - | - | 0 B/op | - |
| Ap (RR) | 6.99 | - | - | 0 B/op | - |
**Analysis:** MonadAp is fast in Either. Idiomatic package doesn't expose direct Ap benchmarks.
### Alternative Operations
| Operation | Either (ns/op) | Idiomatic (ns/op) | Speedup | Either Allocs | Idio Allocs |
|-----------|----------------|-------------------|---------|---------------|-------------|
| Alt (RR) | 5.72 | **2.40** | **2.4x** ✓ | 0 B/op | 0 B/op |
| Alt (LR) | 4.89 | **2.39** | **2.0x** ✓ | 0 B/op | 0 B/op |
| OrElse (Right) | 5.28 | **2.40** | **2.2x** ✓ | 0 B/op | 0 B/op |
| OrElse (Left) | 3.99 | **2.42** | **1.6x** ✓ | 0 B/op | 0 B/op |
**Analysis:** Idiomatic package is consistently 2x faster for alternative operations.
### GetOrElse Operations
| Operation | Either (ns/op) | Idiomatic (ns/op) | Speedup | Either Allocs | Idio Allocs |
|-----------|----------------|-------------------|---------|---------------|-------------|
| GetOrElse (Right) | 9.01 | **1.49** | **6.1x** ✓✓ | 0 B/op | 0 B/op |
| GetOrElse (Left) | 6.35 | **2.08** | **3.1x** ✓✓ | 0 B/op | 0 B/op |
**Analysis:** Idiomatic package shows dramatic advantage for value extraction (3-6x faster).
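For reference, the extraction pattern being measured looks like this on the either side (a minimal sketch; the package name and the port/default values are illustrative):

```go
package config

import E "github.com/IBM/fp-go/v2/either"

// portOrDefault extracts the configured port, falling back to 8080
// when cfg carries an error (a Left).
func portOrDefault(cfg E.Either[error, int]) int {
	return E.GetOrElse(func(error) int { return 8080 })(cfg)
}
```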
### TryCatch Operations
| Operation | Either (ns/op) | Idiomatic (ns/op) | Note |
|-----------|----------------|-------------------|------|
| TryCatch (Success) | 2.39 | N/A | Either-specific |
| TryCatch (Error) | 3.40 | N/A | Either-specific |
| TryCatchError (Success) | 3.32 | N/A | Either-specific |
| TryCatchError (Error) | 6.44 | N/A | Either-specific |
**Analysis:** TryCatch/TryCatchError are Either-specific for wrapping (value, error) tuples.
### Other Operations
| Operation | Either (ns/op) | Idiomatic (ns/op) | Speedup | Either Allocs | Idio Allocs |
|-----------|----------------|-------------------|---------|---------------|-------------|
| Swap (Right) | 2.30 | - | - | 0 B/op | - |
| Swap (Left) | 3.05 | - | - | 0 B/op | - |
| MapTo (Right) | - | 1.60 | - | - | 0 B/op |
| MapTo (Left) | - | 1.73 | - | - | 0 B/op |
| ChainTo (Right) | - | 2.66 | - | - | 0 B/op |
| ChainTo (Left) | - | 2.85 | - | - | 0 B/op |
| Reduce (Right) | - | 2.34 | - | - | 0 B/op |
| Reduce (Left) | - | 1.40 | - | - | 0 B/op |
| Flap (Right) | - | 3.86 | - | - | 0 B/op |
| Flap (Left) | - | 2.58 | - | - | 0 B/op |
### FromPredicate Operations
| Operation | Either (ns/op) | Idiomatic (ns/op) | Speedup | Either Allocs | Idio Allocs |
|-----------|----------------|-------------------|---------|---------------|-------------|
| FromPredicate (Pass) | - | 3.38 | - | - | 0 B/op |
| FromPredicate (Fail) | - | 5.03 | - | - | 0 B/op |
**Analysis:** FromPredicate in idiomatic shows good performance for validation patterns.
### Option Conversion
| Operation | Either (ns/op) | Idiomatic (ns/op) | Speedup | Either Allocs | Idio Allocs |
|-----------|----------------|-------------------|---------|---------------|-------------|
| ToOption (Right) | - | 1.17 | - | - | 0 B/op |
| ToOption (Left) | - | 1.21 | - | - | 0 B/op |
| FromOption (Some) | - | 2.68 | - | - | 0 B/op |
| FromOption (None) | - | 3.72 | - | - | 0 B/op |
**Analysis:** Very fast conversion between Result and Option in idiomatic package.
## Pipeline Benchmarks
These benchmarks measure realistic composition scenarios using F.Pipe.
### Simple Map Pipeline
| Operation | Either (ns/op) | Idiomatic (ns/op) | Speedup | Either Allocs | Idio Allocs |
|-----------|----------------|-------------------|---------|---------------|-------------|
| Pipeline Map (Right) | 112.7 | **46.5** | **2.4x** ✓ | 72 B, 3 allocs | 48 B, 2 allocs |
| Pipeline Map (Left) | 116.8 | **47.2** | **2.5x** ✓ | 72 B, 3 allocs | 48 B, 2 allocs |
### Chain Pipeline
| Operation | Either (ns/op) | Idiomatic (ns/op) | Speedup | Either Allocs | Idio Allocs |
|-----------|----------------|-------------------|---------|---------------|-------------|
| Pipeline Chain (Right) | 74.4 | **26.1** | **2.9x** ✓ | 48 B, 2 allocs | 24 B, 1 allocs |
| Pipeline Chain (Left) | 86.4 | **25.7** | **3.4x** ✓ | 48 B, 2 allocs | 24 B, 1 allocs |
### Complex Pipeline (Map → Chain → Map)
| Operation | Either (ns/op) | Idiomatic (ns/op) | Speedup | Either Allocs | Idio Allocs |
|-----------|----------------|-------------------|---------|---------------|-------------|
| Complex (Right) | 279.8 | **116.3** | **2.4x** ✓ | 192 B, 8 allocs | 120 B, 5 allocs |
| Complex (Left) | 288.1 | **115.8** | **2.5x** ✓ | 192 B, 8 allocs | 120 B, 5 allocs |
**Analysis:**
- Idiomatic package shows **2-3.4x speedup** for realistic pipelines
- Significantly fewer allocations in all pipeline scenarios
- The gap widens as pipelines become more complex
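The "Complex" rows correspond to a composition of roughly this shape on the either side (a sketch; the package name and the concrete step functions are illustrative):

```go
package pipeline

import (
	"strconv"

	E "github.com/IBM/fp-go/v2/either"
	F "github.com/IBM/fp-go/v2/function"
)

// complex mirrors the Map → Chain → Map pipeline measured above.
func complex(n int) E.Either[error, string] {
	return F.Pipe3(
		E.Of[error](n),
		E.Map[error](func(x int) int { return x * 2 }),
		E.Chain(func(x int) E.Either[error, int] { return E.Of[error](x + 1) }),
		E.Map[error](strconv.Itoa),
	)
}
```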
## Array/Collection Operations
### TraverseArray
| Operation | Either (ns/op) | Idiomatic (ns/op) | Note |
|-----------|----------------|-------------------|------|
| TraverseArray (Success) | - | 32.3 | 48 B, 1 alloc |
| TraverseArray (Error) | - | 28.3 | 48 B, 1 alloc |
**Analysis:** Idiomatic package provides efficient array traversal with minimal allocations.
## Validation (ApV)
### ApV Operations
| Operation | Either (ns/op) | Idiomatic (ns/op) | Speedup | Either Allocs | Idio Allocs |
|-----------|----------------|-------------------|---------|---------------|-------------|
| ApV (BothRight) | - | 1.17 | - | - | 0 B/op |
| ApV (BothLeft) | - | 141.5 | - | - | 48 B, 2 allocs |
**Analysis:** Idiomatic's validation applicative shows fast success path, with allocations only when accumulating errors.
## String Formatting
| Operation | Either (ns/op) | Idiomatic (ns/op) | Speedup | Either Allocs | Idio Allocs |
|-----------|----------------|-------------------|---------|---------------|-------------|
| String/ToString (Right) | 139.9 | **81.8** | **1.7x** ✓ | 16 B, 1 alloc | 16 B, 1 alloc |
| String/ToString (Left) | 161.6 | **72.7** | **2.2x** ✓ | 48 B, 1 alloc | 24 B, 1 alloc |
**Analysis:** Idiomatic package formats strings faster with fewer allocations for Left values.
## Do-Notation
| Operation | Either (ns/op) | Idiomatic (ns/op) | Note |
|-----------|----------------|-------------------|------|
| Do | 2.03 | - | Either-specific |
| Bind | 153.4 | - | 96 B, 4 allocs |
| Let | 33.5 | - | 16 B, 1 alloc |
**Analysis:** Do-notation is specific to Either package for monadic composition patterns.
## Summary Statistics
### Simple Operations (< 10 ns/op)
**Either Package:**
- Count: 24 operations
- Average: 3.2 ns/op
- Range: 1.24 - 9.01 ns/op
**Idiomatic Package:**
- Count: 36 operations
- Average: 2.1 ns/op
- Range: 1.17 - 5.03 ns/op
**Winner:** Idiomatic (1.5x faster average)
### Complex Operations (Pipelines, allocations)
**Either Package:**
- Pipeline Map: 112.7 ns/op (72 B, 3 allocs)
- Pipeline Chain: 74.4 ns/op (48 B, 2 allocs)
- Complex: 279.8 ns/op (192 B, 8 allocs)
- ChainFirst: 87.6 ns/op (72 B, 3 allocs)
**Idiomatic Package:**
- Pipeline Map: 46.5 ns/op (48 B, 2 allocs)
- Pipeline Chain: 26.1 ns/op (24 B, 1 allocs)
- Complex: 116.3 ns/op (120 B, 5 allocs)
- ChainFirst: 2.71 ns/op (0 B, 0 allocs)
**Winner:** Idiomatic (2-32x faster, significantly fewer allocations)
### Allocation Analysis
**Either Package:**
- Zero-allocation operations: Most simple operations
- Operations with allocations: Pipelines, Bind, Do-notation, ChainFirst
**Idiomatic Package:**
- Zero-allocation operations: Almost all operations except pipelines and validation
- Significantly fewer allocations in pipeline scenarios
- ChainFirst: **Zero allocations** (vs 72 B in Either)
## Performance Characteristics
### Where Either Package Excels
1. **Comparable to Idiomatic**: After optimizations, Either matches Idiomatic for constructors and predicates
2. **Feature Richness**: More operations (Do-notation, Bind, Let, Flatten, Swap)
3. **Type Flexibility**: Full Either[E, A] with custom error types
### Where Idiomatic Package Excels
1. **Core Operations**: 1.2-2.3x faster for Map, Chain, Fold
2. **Complex Operations**: 32x faster for ChainFirst
3. **Pipelines**: 2-3.4x faster with fewer allocations
4. **Extraction**: 3-6x faster for GetOrElse
5. **Alternative**: 2x faster for Alt/OrElse
6. **BiMap**: 3-4x faster
7. **Consistency**: More predictable performance profile
## Real-World Performance Impact
### Hot Path Example (1 million operations)
```go
// Map operation (very common)
// Either: 5.13 ns/op × 1M = 5.13 ms
// Idiomatic: 4.34 ns/op × 1M = 4.34 ms
// Savings: 0.79 ms per million operations
// Chain operation (common in pipelines)
// Either: 5.44 ns/op × 1M = 5.44 ms
// Idiomatic: 2.34 ns/op × 1M = 2.34 ms
// Savings: 3.10 ms per million operations
// Pipeline Complex (realistic composition)
// Either: 279.8 ns/op × 1M = 279.8 ms
// Idiomatic: 116.3 ns/op × 1M = 116.3 ms
// Savings: 163.5 ms per million operations
```
### Memory Impact
For 1 million ChainFirst operations:
- Either: 72 MB allocated
- Idiomatic: 0 MB allocated
- **Savings: 72 MB + reduced GC pressure**
## Recommendations
### Use Idiomatic Package When:
1. **Performance is Critical**
- Hot paths in your application
- High-throughput services (>10k req/s)
- Complex operation chains
- Memory-constrained environments
2. **Natural Go Integration**
- Working with stdlib (value, error) patterns
- Team familiar with Go idioms
- Simple migration from existing code
- Want zero-cost abstractions
3. **Pipeline-Heavy Code**
- 2-3.4x faster pipelines
- Significantly fewer allocations
- Better CPU cache utilization
### Use Either Package When:
1. **Feature Requirements**
- Need custom error types (Either[E, A])
- Using Do-notation for complex compositions
- Need Flatten, Swap, or other Either-specific operations
- Porting from FP languages (Scala, Haskell)
2. **Type Safety Over Performance**
- Explicit Either semantics
- Algebraic data type guarantees
- Teaching/learning FP concepts
3. **Moderate Performance Needs**
- After optimizations, Either is quite fast
- Difference matters only at high scale
- Code clarity > micro-optimizations
### Hybrid Approach
```go
// Use Either for complex type safety
import "github.com/IBM/fp-go/v2/either"
type ValidationError struct { Field, Message string }
validated := either.Either[ValidationError, Input]{...}
// Convert to Idiomatic for hot path
import "github.com/IBM/fp-go/v2/idiomatic/result"
value, err := either.UnwrapError(either.MapLeft(toError)(validated))
processed, err := result.Chain(hotPathProcessing)(value, err)
```
## Conclusion
After optimizations to the Either package:
1. **Both packages achieve zero heap allocations for constructors** - The Either struct (24 bytes) does NOT escape to heap
2. **Simple operations** are now **comparable** between both packages (~1-2 ns/op, 0 B/op)
3. **Core transformations** favor Idiomatic by **1.2-2.3x**
4. **Complex operations** heavily favor Idiomatic by **2-32x**
5. **Memory efficiency** strongly favors Idiomatic (especially ChainFirst: 72 B → 0 B)
6. **Real-world pipelines** show **2-3.4x speedup** with Idiomatic
### Key Insight: No Heap Escape Myth
A critical finding: **Both packages avoid heap allocations for simple operations.** The Either struct is small enough (24 bytes) that Go returns it by value on the stack, not the heap. The `0 B/op, 0 allocs/op` benchmarks confirm this.
The performance differences come from:
- **Function composition overhead** in complex operations
- **Currying and closure creation** in pipelines
- **Tuple simplicity** vs struct field access
Not from constructor allocations—both are equally efficient there.
### Final Verdict
The idiomatic package provides a compelling performance advantage for production workloads while maintaining zero-cost functional programming abstractions. The Either package remains excellent for type safety, feature richness, and scenarios where explicit Either[E, A] semantics are valuable.
**Bottom Line:**
- For **high-performance Go services**: idiomatic package is the clear winner (1.2-32x faster)
- For **type-safe, feature-rich FP**: Either package is excellent (comparable simple ops, more features)
- **Both avoid heap allocations** for constructors—choose based on your performance vs features trade-off

View File

@@ -0,0 +1,344 @@
# Deep Chaining Performance Analysis
## Executive Summary
The **only remaining performance gap** between `v2/option` and `idiomatic/option` is in **deep chaining operations** (multiple sequential transformations). This document demonstrates the problem, explains the root cause, and provides recommendations.
## Benchmark Results
### v2/option (Struct-based)
```
BenchmarkChain_3Steps 8.17 ns/op 0 allocs
BenchmarkChain_5Steps 16.57 ns/op 0 allocs
BenchmarkChain_10Steps 47.01 ns/op 0 allocs
BenchmarkMap_5Steps 0.28 ns/op 0 allocs ⚡
```
### idiomatic/option (Tuple-based)
```
BenchmarkChain_3Steps 0.22 ns/op 0 allocs ⚡
BenchmarkChain_5Steps 0.22 ns/op 0 allocs ⚡
BenchmarkChain_10Steps 0.21 ns/op 0 allocs ⚡
BenchmarkMap_5Steps 0.22 ns/op 0 allocs ⚡
```
### Performance Comparison
| Steps | v2/option | idiomatic/option | Slowdown |
|-------|-----------|------------------|----------|
| 3 | 8.17 ns | 0.22 ns | **37x slower** |
| 5 | 16.57 ns | 0.22 ns | **75x slower** |
| 10 | 47.01 ns | 0.21 ns | **224x slower** |
**Key Finding**: The performance gap **increases linearly** with chain depth!
---
## Visual Example: The Problem
### Scenario: Processing User Input
```go
// Process user input through multiple validation steps
input := "42"
// v2/option - Nested MonadChain
result := MonadChain(
MonadChain(
MonadChain(
Some(input),
validateNotEmpty, // Step 1
),
parseToInt, // Step 2
),
validateRange, // Step 3
)
```
### What Happens Under the Hood
#### v2/option (Struct Construction Overhead)
```go
// Step 0: Initial value
Some(input)
// Creates: Option[string]{value: "42", isSome: true}
// Cost: one Option struct value constructed (stack-resident, per the 0-alloc benchmarks)
// Step 1: Validate not empty
MonadChain(opt, validateNotEmpty)
// Input: Option[string]{value: "42", isSome: true} ← struct read
// Output: Option[string]{value: "42", isSome: true} ← NEW struct constructed
// Running total: 2 struct constructions
// Step 2: Parse to int
MonadChain(opt, parseToInt)
// Input: Option[string]{value: "42", isSome: true} ← struct read
// Output: Option[int]{value: 42, isSome: true} ← NEW struct constructed
// Running total: 3 struct constructions
// Step 3: Validate range
MonadChain(opt, validateRange)
// Input: Option[int]{value: 42, isSome: true} ← struct read
// Output: Option[int]{value: 42, isSome: true} ← NEW struct constructed
// Running total: 4 struct constructions (still 0 heap allocations, per the benchmarks above)
// Each step:
// 1. Reads the Option struct
// 2. Checks the isSome field
// 3. Calls the function
// 4. Constructs a NEW Option struct
// 5. Copies it back to the caller
```
#### idiomatic/option (Zero Allocation)
```go
// Step 0: Initial value
s, ok := Some(input)
// Creates: ("42", true)
// Memory: STACK only (registers)
// Step 1: Validate not empty
v1, ok1 := Chain(validateNotEmpty)(s, ok)
// Input: ("42", true) ← Values in registers
// Output: ("42", true) ← Values in registers
// Memory: ZERO allocations
// Step 2: Parse to int
v2, ok2 := Chain(parseToInt)(v1, ok1)
// Input: ("42", true) ← Values in registers
// Output: (42, true) ← Values in registers
// Memory: ZERO allocations
// Step 3: Validate range
v3, ok3 := Chain(validateRange)(v2, ok2)
// Input: (42, true) ← Values in registers
// Output: (42, true) ← Values in registers
// Memory: ZERO allocations TOTAL
// Each step:
// 1. Reads values from registers (no memory access!)
// 2. Checks bool flag
// 3. Calls function
// 4. Returns new tuple (stays in registers)
// 5. Compiler optimizes everything away!
```
---
## Assembly-Level Difference
### v2/option - Struct Overhead
```asm
; Every chain step does:
MOV RAX, [opt_ptr] ; Load the Option struct
TEST BYTE [RAX+8], 1 ; Check isSome field
JZ none_case ; Branch if None
MOV RDI, [RAX] ; Load value from struct
CALL transform_func ; Call the function (often not inlined)
MOV [new_opt], result ; Construct a new Option struct ⚠️
MOV [new_opt+8], 1 ; Set isSome = true
```
### idiomatic/option - Optimized Away
```asm
; All steps compiled to:
MOV EAX, 42 ; The final result!
; Everything else optimized away! ⚡
```
**Compiler insight**: With tuples, the Go compiler can:
1. **Inline everything** - No function call overhead
2. **Eliminate branches** - Constant propagation removes `if ok` checks
3. **Use registers only** - Values never touch memory
4. **Dead code elimination** - Removes unnecessary operations
---
## Real-World Example with Timings
### Example: User Registration Validation Chain
```go
// Validate: email → trim → lowercase → check format → check uniqueness
```
#### v2/option Performance
```go
func ValidateEmail_v2(email string) Option[string] {
return MonadChain(
MonadChain(
MonadChain(
MonadChain(
Some(email),
trimWhitespace, // ~2 ns
),
toLowerCase, // ~2 ns
),
validateFormat, // ~2 ns
),
checkUniqueness, // ~2 ns
)
}
// Total: ~8-16 ns (matches our 5-step benchmark: 16.57 ns)
```
#### idiomatic/option Performance
```go
func ValidateEmail_idiomatic(email string) (string, bool) {
v1, ok1 := Chain(trimWhitespace)(email, true)
v2, ok2 := Chain(toLowerCase)(v1, ok1)
v3, ok3 := Chain(validateFormat)(v2, ok2)
return Chain(checkUniqueness)(v3, ok3)
}
// Total: ~0.22 ns (entire chain optimized to single operation!)
```
**Impact**: For 1 million validations:
- v2/option: 16.57 ms
- idiomatic/option: 0.22 ms
- **Difference: 75x faster = saved 16.35 ms**
---
## Why Map is Fast in v2/option
Interestingly, `Map` (pure transformations) is **much faster** than `Chain`:
```
v2/option:
- BenchmarkChain_5Steps: 16.57 ns
- BenchmarkMap_5Steps: 0.28 ns ← 59x FASTER!
```
**Reason**: Map transformations can be **inlined and fused** by the compiler:
```go
// This:
Map(f5)(Map(f4)(Map(f3)(Map(f2)(Map(f1)(opt)))))
// Becomes (after compiler optimization):
Some(f5(f4(f3(f2(f1(value)))))) // Single struct construction!
// While Chain cannot be optimized the same way:
MonadChain(MonadChain(...)) // Must construct at each step
```
---
## When Does This Matter?
### ⚠️ **Rarely Critical** (99% of use cases)
Even 10-step chains only cost **47 nanoseconds**. For context:
- Database query: **~1,000,000 ns** (1 ms)
- HTTP request: **~10,000,000 ns** (10 ms)
- File I/O: **~100,000 ns** (0.1 ms)
**The 47 ns overhead is negligible compared to real I/O operations.**
### ⚡ **Can Matter** (High-throughput scenarios)
1. **In-memory data processing pipelines**
```go
// Processing 10 million records with 5-step validation
v2/option: 165 ms
idiomatic/option: 2 ms
Difference: 163 ms saved ⚡
```
2. **Real-time stream processing**
- Processing 100k events/second with chained transformations
- 16.57 ns × 100,000 = 1.66 ms vs 0.22 ns × 100,000 = 0.022 ms
- Can affect throughput for high-frequency trading, gaming, etc.
3. **Tight inner loops with chained logic**
```go
for i := 0; i < 1_000_000; i++ {
result := Chain(f4)(Chain(f3)(Chain(f2)(Chain(f1)(Some(data[i])))))
}
// v2/option: 16 ms
// idiomatic: 0.22 ms
```
---
## Root Cause Summary
| Aspect | v2/option | idiomatic/option | Why? |
|--------|-----------|------------------|------|
| **Intermediate values** | `Option[T]` struct | `(T, bool)` tuple | Struct requires memory, tuple can use registers |
| **Struct construction** | 1 per step | 0 total | Struct copies vs register-resident tuples |
| **Compiler optimization** | Limited | Aggressive | Structs block inlining |
| **Memory traffic** | Struct reads/writes | Register-only | Memory bandwidth saved |
| **Branch prediction** | Struct checks | Optimized away | Compiler removes branches |
---
## Recommendations
### ✅ **Use v2/option When:**
- I/O-bound operations (database, network, files)
- User-facing applications (latency dominated by I/O)
- Need JSON marshaling, TryCatch, SequenceArray
- Chain depth < 5 steps (overhead < 20 ns - negligible)
- Code clarity > microsecond performance
### ✅ **Use idiomatic/option When:**
- CPU-bound data processing
- High-throughput stream processing
- Tight inner loops with chaining
- In-memory analytics
- Performance-critical paths
- Chain depth > 5 steps
### ✅ **Mitigation for v2/option:**
If you need v2/option but want better chain performance:
1. **Use Map instead of Chain** when possible:
```go
// Bad (16.57 ns):
MonadChain(MonadChain(MonadChain(opt, f1), f2), f3)
// Good (0.28 ns):
Map(f3)(Map(f2)(Map(f1)(opt)))
```
2. **Batch operations**:
```go
// Instead of chaining many steps:
validate := func(x T) Option[T] {
// Combine multiple checks in one function
if check1(x) && check2(x) && check3(x) {
return Some(transform(x))
}
return None[T]()
}
```
3. **Profile first**:
- Only optimize hot paths
- 47 ns is often acceptable
- Don't premature optimize
---
## Conclusion
**The deep chaining performance gap is:**
- ✅ **Real and measurable** (37-224x slower)
- ✅ **Well understood** (struct construction overhead)
- ⚠️ **Rarely critical** (nanosecond differences usually don't matter)
- ✅ **Easy to work around** (use Map, batch operations)
- ✅ **Worth it for the API benefits** (JSON, methods, helpers)
**For 99% of applications, v2/option's performance is excellent.** The gap only matters in specialized high-throughput scenarios where you should probably use idiomatic/option anyway.
The optimizations already applied (`//go:inline`, direct field access) brought v2/option to **competitive parity** for all practical purposes. The remaining gap is a **fundamental design trade-off**, not a fixable bug.

v2/DESIGN.md Normal file (+574 lines)
View File

@@ -0,0 +1,574 @@
# Design Decisions
This document explains the key design decisions and principles behind fp-go's API design.
## Table of Contents
- [Data Last Principle](#data-last-principle)
- [Kleisli and Operator Types](#kleisli-and-operator-types)
- [Monadic Operations Comparison](#monadic-operations-comparison)
- [Type Parameter Ordering](#type-parameter-ordering)
- [Generic Type Aliases](#generic-type-aliases)
## Data Last Principle
fp-go follows the **"data last"** principle, where the data being operated on is always the last parameter in a function. This design choice enables powerful function composition and partial application patterns.
### What is "Data Last"?
In the "data last" style, functions are structured so that:
1. Configuration parameters come first
2. The data to be transformed comes last
This is the opposite of the traditional object-oriented style where the data (receiver) comes first.
### Why "Data Last"?
The "data last" principle enables:
1. **Natural Currying**: Functions can be partially applied to create specialized transformations
2. **Function Composition**: Operations can be composed before applying them to data
3. **Point-Free Style**: Write transformations without explicitly mentioning the data
4. **Reusability**: Create reusable transformation pipelines
### Examples
#### Basic Transformation
```go
// Data last style (fp-go)
double := array.Map(number.Mul(2))
result := double([]int{1, 2, 3}) // [2, 4, 6]
// Compare with data first style (traditional)
result := array.Map([]int{1, 2, 3}, number.Mul(2))
```
#### Function Composition
```go
import (
A "github.com/IBM/fp-go/v2/array"
F "github.com/IBM/fp-go/v2/function"
N "github.com/IBM/fp-go/v2/number"
)
// Create a pipeline of transformations
pipeline := F.Flow3(
A.Filter(N.MoreThan(0)), // Keep positive numbers
A.Map(N.Mul(2)), // Double each number
A.Reduce(func(acc, x int) int { return acc + x }, 0), // Sum them up
)
// Apply the pipeline to different data
result1 := pipeline([]int{-1, 2, 3, -4, 5}) // (2 + 3 + 5) * 2 = 20
result2 := pipeline([]int{1, 2, 3}) // (1 + 2 + 3) * 2 = 12
```
#### Partial Application
```go
import (
O "github.com/IBM/fp-go/v2/option"
)
// Create specialized functions by partial application
getOrZero := O.GetOrElse(func() int { return 0 })
getOrEmpty := O.GetOrElse(func() string { return "" })
// Use them with different data
value1 := getOrZero(O.Some(42)) // 42
value2 := getOrZero(O.None[int]()) // 0
text1 := getOrEmpty(O.Some("hello")) // "hello"
text2 := getOrEmpty(O.None[string]()) // ""
```
#### Building Reusable Transformations
```go
import (
E "github.com/IBM/fp-go/v2/either"
O "github.com/IBM/fp-go/v2/option"
)
// Create a reusable validation pipeline
type User struct {
Name string
Email string
Age int
}
validateAge := E.FromPredicate(
func(u User) bool { return u.Age >= 18 },
func(u User) error { return errors.New("must be 18 or older") },
)
validateEmail := E.FromPredicate(
func(u User) bool { return strings.Contains(u.Email, "@") },
func(u User) error { return errors.New("invalid email") },
)
// Compose validators
validateUser := F.Flow2(
validateAge,
E.Chain(validateEmail),
)
// Apply to different users
result1 := validateUser(User{Name: "Alice", Email: "alice@example.com", Age: 25})
result2 := validateUser(User{Name: "Bob", Email: "invalid", Age: 30})
```
#### Monadic Operations
```go
import (
O "github.com/IBM/fp-go/v2/option"
)
// Data last enables clean monadic chains
parseAndDouble := F.Flow2(
O.FromPredicate(func(s string) bool { return s != "" }),
O.Chain(func(s string) O.Option[int] {
n, err := strconv.Atoi(s)
if err != nil {
return O.None[int]()
}
return O.Some(n * 2)
}),
)
result1 := parseAndDouble("21") // Some(42)
result2 := parseAndDouble("") // None
result3 := parseAndDouble("abc") // None
```
### Monadic vs Non-Monadic Forms
fp-go provides two forms for most operations:
1. **Curried form** (data last): Returns a function that can be composed
2. **Monadic form** (data first): Takes all parameters at once
```go
// Curried form - data last, returns a function
Map[A, B any](f func(A) B) func(Option[A]) Option[B]
// Monadic form - data first, direct execution
MonadMap[A, B any](fa Option[A], f func(A) B) Option[B]
```
**When to use each:**
- **Curried form**: When building pipelines, composing functions, or creating reusable transformations
- **Monadic form**: When you have all parameters available and want direct execution
```go
// Curried form - building a pipeline
transform := F.Flow3(
O.Map(strings.ToUpper),
O.Filter(func(s string) bool { return len(s) > 3 }),
O.GetOrElse(func() string { return "DEFAULT" }),
)
result := transform(O.Some("hello"))
// Monadic form - direct execution
result := O.MonadMap(O.Some("hello"), strings.ToUpper)
```
### Further Reading on Data-Last Pattern
The data-last currying pattern is well-documented in the functional programming community:
- [Mostly Adequate Guide - Ch. 4: Currying](https://mostly-adequate.gitbook.io/mostly-adequate-guide/ch04) - Excellent introduction with clear examples
- [Curry and Function Composition](https://medium.com/javascript-scene/curry-and-function-composition-2c208d774983) by Eric Elliott
- [fp-ts Issue #1238](https://github.com/gcanti/fp-ts/issues/1238) - Real-world examples of data-last refactoring
## Kleisli and Operator Types
fp-go uses consistent type aliases across all monads to make code more recognizable and composable. These types provide a common vocabulary that works across different monadic contexts.
### Type Definitions
```go
// Kleisli arrow - a function that returns a monadic value
type Kleisli[A, B any] = func(A) M[B]
// Operator - a function that transforms a monadic value
type Operator[A, B any] = func(M[A]) M[B]
```
Where `M` represents the specific monad (Option, Either, IO, etc.).
### Why These Types Matter
1. **Consistency**: The same type names appear across all monads
2. **Recognizability**: Experienced functional programmers immediately understand the intent
3. **Composability**: Functions with these types compose naturally
4. **Documentation**: Type signatures clearly communicate the operation's behavior
### Examples Across Monads
#### Option Monad
```go
// option/option.go
type Kleisli[A, B any] = func(A) Option[B]
type Operator[A, B any] = func(Option[A]) Option[B]
// Chain uses Kleisli
func Chain[A, B any](f Kleisli[A, B]) Operator[A, B]
// Map returns an Operator
func Map[A, B any](f func(A) B) Operator[A, B]
```
#### Either Monad
```go
// either/either.go
type Kleisli[E, A, B any] = func(A) Either[E, B]
type Operator[E, A, B any] = func(Either[E, A]) Either[E, B]
// Chain uses Kleisli
func Chain[E, A, B any](f Kleisli[E, A, B]) Operator[E, A, B]
// Map returns an Operator
func Map[E, A, B any](f func(A) B) Operator[E, A, B]
```
#### IO Monad
```go
// io/io.go
type Kleisli[A, B any] = func(A) IO[B]
type Operator[A, B any] = func(IO[A]) IO[B]
// Chain uses Kleisli
func Chain[A, B any](f Kleisli[A, B]) Operator[A, B]
// Map returns an Operator
func Map[A, B any](f func(A) B) Operator[A, B]
```
#### Array (List Monad)
```go
// array/array.go
type Kleisli[A, B any] = func(A) []B
type Operator[A, B any] = func([]A) []B
// Chain uses Kleisli
func Chain[A, B any](f Kleisli[A, B]) Operator[A, B]
// Map returns an Operator
func Map[A, B any](f func(A) B) Operator[A, B]
```
### Pattern Recognition
Once you learn these patterns in one monad, you can apply them to all monads:
```go
// The pattern is always the same, just the monad changes
// Option
validateAge := option.Chain(func(user User) option.Option[User] {
if user.Age >= 18 {
return option.Some(user)
}
return option.None[User]()
})
// Either
validateAge := either.Chain(func(user User) either.Either[error, User] {
if user.Age >= 18 {
return either.Right[error](user)
}
return either.Left[User](errors.New("too young"))
})
// IO
validateAge := io.Chain(func(user User) io.IO[User] {
return io.Of(user) // Always succeeds in IO
})
// Array
validateAge := array.Chain(func(user User) []User {
if user.Age >= 18 {
return []User{user}
}
return []User{} // Empty array = failure
})
```
### Composing Kleisli Arrows
Kleisli arrows compose naturally using monadic composition:
```go
import (
O "github.com/IBM/fp-go/v2/option"
F "github.com/IBM/fp-go/v2/function"
)
// Define Kleisli arrows
parseAge := func(s string) O.Option[int] {
n, err := strconv.Atoi(s)
if err != nil {
return O.None[int]()
}
return O.Some(n)
}
validateAge := func(age int) O.Option[int] {
if age >= 18 {
return O.Some(age)
}
return O.None[int]()
}
formatAge := func(age int) O.Option[string] {
return O.Some(fmt.Sprintf("Age: %d", age))
}
// Compose them using Flow and Chain
pipeline := F.Flow3(
parseAge,
O.Chain(validateAge),
O.Chain(formatAge),
)
result := pipeline("25") // Some("Age: 25")
result := pipeline("15") // None (too young)
result := pipeline("abc") // None (parse error)
```
### Building Reusable Operators
Operators can be created once and reused across your codebase:
```go
import (
E "github.com/IBM/fp-go/v2/either"
)
// Create reusable operators
type ValidationError struct {
Field string
Message string
}
// Reusable validation operators
validateNonEmpty := E.Chain(func(s string) E.Either[ValidationError, string] {
if s == "" {
return E.Left[string](ValidationError{
Field: "input",
Message: "cannot be empty",
})
}
return E.Right[ValidationError](s)
})
validateEmail := E.Chain(func(s string) E.Either[ValidationError, string] {
if !strings.Contains(s, "@") {
return E.Left[string](ValidationError{
Field: "email",
Message: "invalid format",
})
}
return E.Right[ValidationError](s)
})
// Compose operators
validateEmailInput := F.Flow2(
validateNonEmpty,
validateEmail,
)
// Use across your application
result1 := validateEmailInput(E.Right[ValidationError]("user@example.com"))
result2 := validateEmailInput(E.Right[ValidationError](""))
result3 := validateEmailInput(E.Right[ValidationError]("invalid"))
```
### Benefits of Consistent Naming
1. **Cross-monad understanding**: Learn once, apply everywhere
2. **Easier refactoring**: Changing monads requires minimal code changes
3. **Better tooling**: IDEs can provide better suggestions
4. **Team communication**: Shared vocabulary across the team
5. **Library integration**: Third-party libraries follow the same patterns
### Identity Monad - The Simplest Case
The Identity monad shows these types in their simplest form:
```go
// identity/doc.go
type Operator[A, B any] = func(A) B
// In Identity, there's no wrapping, so:
// - Kleisli[A, B] is just func(A) B
// - Operator[A, B] is just func(A) B
// They're the same because Identity adds no context
```
This demonstrates that these type aliases represent fundamental functional programming concepts, not just arbitrary naming conventions.
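To see the collapse concretely, here is a minimal sketch (assuming the v2 `identity` package exposes the usual `Map` and `Chain` following the conventions above):

```go
package main

import (
	"fmt"

	"github.com/IBM/fp-go/v2/identity"
)

func main() {
	double := func(x int) int { return x * 2 }

	// In Identity there is no wrapper, so the Kleisli arrow is the plain function
	// and both Map and Chain reduce to ordinary function application.
	viaMap := identity.Map(double)(21)
	viaChain := identity.Chain(double)(21)

	fmt.Println(viaMap, viaChain) // 42 42
}
```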
## Monadic Operations Comparison
fp-go's monadic operations are inspired by functional programming languages and libraries. Here's how they compare:
| fp-go | fp-ts | Haskell | Scala | Description |
|-------|-------|---------|-------|-------------|
| `Map` | `map` | `fmap` | `map` | Functor mapping - transforms the value inside a context |
| `Chain` | `chain` | `>>=` (bind) | `flatMap` | Monadic bind - chains computations that return wrapped values |
| `Ap` | `ap` | `<*>` | `ap` | Applicative apply - applies a wrapped function to a wrapped value |
| `Of` | `of` | `return`/`pure` | `pure` | Lifts a pure value into a monadic context |
| `Fold` | `fold` | `either` | `fold` | Eliminates the context by providing handlers for each case |
| `Filter` | `filter` | `mfilter` | `filter` | Keeps values that satisfy a predicate |
| `Flatten` | `flatten` | `join` | `flatten` | Removes one level of nesting |
| `ChainFirst` | `chainFirst` | `>>` (then) | `tap` | Chains for side effects, keeping the original value |
| `Alt` | `alt` | `<\|>` | `orElse` | Provides an alternative value if the first fails |
| `GetOrElse` | `getOrElse` | `fromMaybe` | `getOrElse` | Extracts the value or provides a default |
| `FromPredicate` | `fromPredicate` | `guard` | `filter` | Creates a monadic value based on a predicate |
| `Sequence` | `sequence` | `sequence` | `sequence` | Transforms a collection of effects into an effect of a collection |
| `Traverse` | `traverse` | `traverse` | `traverse` | Maps and sequences in one operation |
| `Reduce` | `reduce` | `foldl` | `foldLeft` | Folds a structure from left to right |
| `ReduceRight` | `reduceRight` | `foldr` | `foldRight` | Folds a structure from right to left |
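The following sketch exercises a few of these rows (`Of`, `Map`, `Chain`, `Fold`) with the `result` package, using only the signatures quoted elsewhere in this document; it is illustrative rather than exhaustive:

```go
package main

import (
	"errors"
	"fmt"

	F "github.com/IBM/fp-go/v2/function"
	"github.com/IBM/fp-go/v2/result"
)

func main() {
	// Chain expects a function that itself returns a wrapped value.
	ensurePositive := func(n int) result.Result[int] {
		if n > 0 {
			return result.Of(n)
		}
		return result.Left[int](errors.New("not positive"))
	}

	out := F.Pipe3(
		result.Of(21),                                // Of: lift a pure value
		result.Map(func(x int) int { return x * 2 }), // Map: transform inside the context
		result.Chain(ensurePositive),                 // Chain: sequence a fallible step
		result.Fold( // Fold: eliminate the context
			func(err error) string { return "error: " + err.Error() },
			func(n int) string { return fmt.Sprintf("value: %d", n) },
		),
	)

	fmt.Println(out) // value: 42
}
```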
### Key Differences from Other Languages
#### Naming Conventions
- **Go conventions**: fp-go uses PascalCase for exported functions (e.g., `Map`, `Chain`) following Go's naming conventions
- **Type parameters first**: Non-inferrable type parameters come first (e.g., `Ap[B, E, A any]`)
- **Monadic prefix**: Direct execution forms use the `Monad` prefix (e.g., `MonadMap`, `MonadChain`); the sketch below contrasts the two forms
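A minimal sketch of the two forms (assuming `option.MonadMap` follows the convention just described):

```go
package main

import (
	"fmt"

	"github.com/IBM/fp-go/v2/option"
)

func main() {
	double := func(x int) int { return x * 2 }

	// Curried form: data last, convenient inside Pipe/Flow pipelines.
	curried := option.Map(double)(option.Some(21))

	// Monad-prefixed form: data first, executed directly without building a closure.
	direct := option.MonadMap(option.Some(21), double)

	fmt.Println(curried, direct) // both contain 42
}
```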
#### Type System
```go
// fp-go (explicit type parameters when needed)
result := option.Map(transform)(value)
result = option.Map[string, int](transform)(value) // explicit when inference fails
// Haskell (type inference)
result = fmap transform value
// Scala (type inference with method syntax)
result = value.map(transform)
// fp-ts (TypeScript type inference)
const result = pipe(value, map(transform))
```
#### Currying
```go
// fp-go - explicit currying with data last
double := array.Map(number.Mul(2))
result := double(numbers)
// Haskell - automatic currying
double = fmap (*2)
result = double numbers
// Scala - method syntax
result = numbers.map(_ * 2)
```
## Type Parameter Ordering
fp-go v2 uses a specific ordering for type parameters to maximize type inference:
### Rule: Non-Inferrable Parameters First
Type parameters that **cannot be inferred** from function arguments come first. This allows the Go compiler to infer as many types as possible.
```go
// Ap - B cannot be inferred from arguments, so it comes first
func Ap[B, E, A any](fa Either[E, A]) func(Either[E, func(A) B]) Either[E, B]
// Usage - only B needs to be specified
result := either.Ap[string](value)(funcInEither)
```
### Examples
```go
// Map - all types can be inferred from arguments
func Map[E, A, B any](f func(A) B) func(Either[E, A]) Either[E, B]
// Usage - no type parameters needed
result := either.Map(transform)(value)
// Chain - all types can be inferred
func Chain[E, A, B any](f func(A) Either[E, B]) func(Either[E, A]) Either[E, B]
// Usage - no type parameters needed
result := either.Chain(validator)(value)
// Of - E cannot be inferred, comes first
func Of[E, A any](value A) Either[E, A]
// Usage - only E needs to be specified
result := either.Of[error](42)
```
### Benefits
1. **Less verbose code**: Most operations don't require explicit type parameters
2. **Better IDE support**: Type inference provides better autocomplete
3. **Clearer intent**: Only specify types that can't be inferred
## Generic Type Aliases
fp-go v2 leverages Go 1.24's generic type aliases for cleaner type definitions:
```go
// V2 - using generic type alias (requires Go 1.24+)
type ReaderIOEither[R, E, A any] = RD.Reader[R, IOE.IOEither[E, A]]
// V1 - using type definition (Go 1.18+)
type ReaderIOEither[R, E, A any] RD.Reader[R, IOE.IOEither[E, A]]
```
### Benefits
1. **True aliases**: The type is interchangeable with its definition
2. **No namespace imports needed**: Can use types directly without package prefixes
3. **Simpler codebase**: Eliminates the need for `generic` subpackages
4. **Better composability**: Types compose more naturally
### Migration Pattern
```go
// Define project-wide aliases once
package types
import (
"github.com/IBM/fp-go/v2/option"
"github.com/IBM/fp-go/v2/result"
"github.com/IBM/fp-go/v2/ioresult"
)
type Option[A any] = option.Option[A]
type Result[A any] = result.Result[A]
type IOResult[A any] = ioresult.IOResult[A]
// Use throughout your codebase
package myapp
import "myproject/types"
func process(input string) types.Result[types.Option[int]] {
// implementation
}
```
---
For more information, see:
- [README.md](./README.md) - Overview and quick start
- [API Documentation](https://pkg.go.dev/github.com/IBM/fp-go/v2) - Complete API reference
- [Samples](./samples/) - Practical examples


@@ -0,0 +1,212 @@
# Example Tests Progress
This document tracks the progress of converting documentation examples into executable example test files.
## Overview
The codebase has 300+ documentation examples across many packages. This document tracks which packages have been completed and which still need work.
## Completed Packages
### Core Packages
- [x] **result** - Created `examples_bind_test.go`, `examples_curry_test.go`, `examples_apply_test.go`
- Files: `bind.go` (10 examples), `curry.go` (5 examples), `apply.go` (2 examples)
- Status: ✅ 17 tests passing
### Utility Packages
- [x] **pair** - Created `examples_test.go`
- Files: `pair.go` (14 examples)
- Status: ✅ 14 tests passing
- [x] **tuple** - Created `examples_test.go`
- Files: `tuple.go` (6 examples)
- Status: ✅ 6 tests passing
### Type Class Packages
- [x] **semigroup** - Created `examples_test.go`
- Files: `semigroup.go` (7 examples)
- Status: ✅ 7 tests passing
### Utility Packages (continued)
- [x] **predicate** - Created `examples_test.go`
- Files: `bool.go` (3 examples), `contramap.go` (1 example)
- Status: ✅ 4 tests passing
### Context Reader Packages
- [x] **idiomatic/context/readerresult** - Created `examples_reader_test.go`, `examples_bind_test.go`
- Files: `reader.go` (8 examples), `bind.go` (14 examples)
- Status: ✅ 22 tests passing
## Summary Statistics
- **Total Example Tests Created**: 74
- **Total Packages Completed**: 6 (result, pair, tuple, semigroup, predicate, idiomatic/context/readerresult)
- **All Tests Status**: ✅ PASSING
### Breakdown by Package
- **result**: 21 tests (bind: 10, curry: 5, apply: 2, array: 4)
- **pair**: 14 tests
- **tuple**: 6 tests
- **semigroup**: 7 tests
- **predicate**: 4 tests
- **idiomatic/context/readerresult**: 22 tests (reader: 8, bind: 14)
## Packages with Existing Examples
These packages already have some example test files:
- result (has `examples_create_test.go`, `examples_extract_test.go`)
- option (has `examples_create_test.go`, `examples_extract_test.go`)
- either (has `examples_create_test.go`, `examples_extract_test.go`)
- ioeither (has `examples_create_test.go`, `examples_do_test.go`, `examples_extract_test.go`)
- ioresult (has `examples_create_test.go`, `examples_do_test.go`, `examples_extract_test.go`)
- lazy (has `example_lazy_test.go`)
- array (has `examples_basic_test.go`, `examples_sort_test.go`, `example_any_test.go`, `example_find_test.go`)
- readerioeither (has `traverse_example_test.go`)
- context/readerioresult (has `flip_example_test.go`)
## Packages Needing Example Tests
### Core Packages (High Priority)
- [ ] **result** - Additional files need examples:
- `apply.go` (2 examples)
- `array.go` (7 examples)
- `core.go` (6 examples)
- `either.go` (26 examples)
- `eq.go` (2 examples)
- `functor.go` (1 example)
- [ ] **option** - Additional files need examples
- [ ] **either** - Additional files need examples
### Reader Packages (High Priority)
- [ ] **reader** - Many examples in:
- `array.go` (12 examples)
- `bind.go` (10 examples)
- `curry.go` (8 examples)
- `flip.go` (2 examples)
- `reader.go` (21 examples)
- [ ] **readeroption** - Examples in:
- `array.go` (3 examples)
- `bind.go` (7 examples)
- `curry.go` (5 examples)
- `flip.go` (2 examples)
- `from.go` (4 examples)
- `reader.go` (18 examples)
- `sequence.go` (4 examples)
- [ ] **readerresult** - Examples in:
- `array.go` (3 examples)
- `bind.go` (24 examples)
- `curry.go` (7 examples)
- `flip.go` (2 examples)
- `from.go` (4 examples)
- `monoid.go` (3 examples)
- [ ] **readereither** - Examples in:
- `array.go` (3 examples)
- `bind.go` (7 examples)
- `flip.go` (3 examples)
- [ ] **readerio** - Examples in:
- `array.go` (3 examples)
- `bind.go` (7 examples)
- `flip.go` (2 examples)
- `logging.go` (4 examples)
- `reader.go` (30 examples)
- [ ] **readerioeither** - Examples in:
- `bind.go` (7 examples)
- `flip.go` (1 example)
- [ ] **readerioresult** - Examples in:
- `array.go` (8 examples)
- `bind.go` (24 examples)
### State Packages
- [ ] **statereaderioeither** - Examples in:
- `bind.go` (5 examples)
- `resource.go` (1 example)
- `state.go` (13 examples)
### Utility Packages
- [ ] **lazy** - Additional examples in:
- `apply.go` (2 examples)
- `bind.go` (7 examples)
- `lazy.go` (10 examples)
- `sequence.go` (4 examples)
- `traverse.go` (2 examples)
- [ ] **pair** - Additional examples in:
- `monad.go` (12 examples)
- `pair.go` (remaining ~20 examples)
- [ ] **tuple** - Examples in:
- `tuple.go` (6 examples)
- [ ] **predicate** - Examples in:
- `bool.go` (3 examples)
- `contramap.go` (1 example)
- `monoid.go` (4 examples)
- [ ] **retry** - Examples in:
- `retry.go` (7 examples)
- [ ] **logging** - Examples in:
- `logger.go` (5 examples)
### Collection Packages
- [ ] **record** - Examples in:
- `bind.go` (3 examples)
### Type Class Packages
- [ ] **semigroup** - Examples in:
- `alt.go` (1 example)
- `apply.go` (1 example)
- `array.go` (4 examples)
- `semigroup.go` (7 examples)
- [ ] **ord** - Examples in:
- `ord.go` (1 example)
## Strategy for Completion
1. **Prioritize by usage**: Focus on core packages (result, option, either) first
2. **Group by package**: Complete all examples for one package before moving to next
3. **Test incrementally**: Run tests after each file to catch errors early
4. **Follow patterns**: Use existing example test files as templates
5. **Document as you go**: Update this file with progress
## Example Test File Template
```go
// Copyright header...
package packagename_test
import (
"fmt"
PKG "github.com/IBM/fp-go/v2/packagename"
)
func ExampleFunctionName() {
// Copy example from doc comment
// Ensure it compiles and produces correct output
fmt.Println(result)
// Output:
// expected output
}
```
## Notes
- Use `F.Constant1[error](defaultValue)` for GetOrElse in result package
- Use `F.Pipe1` instead of `F.Pipe2` when only one transformation
- Check function signatures carefully for type parameters
- Some functions like `BiMap` are capitalized differently than in docs
- **Prefer `R.Eitherize1(func)` over manual error handling** - converts `func(T) (R, error)` to `func(T) Result[R]`
  - Example: Use `R.Eitherize1(strconv.Atoi)` instead of manual if/else error checking (see the sketch after these notes)
- **Add Go documentation comments to all example functions** - Each example should have a comment explaining what it demonstrates
- **Idiomatic vs Non-Idiomatic packages**:
- Non-idiomatic (e.g., `result`): Uses `Result[A]` type (Either monad)
- Idiomatic (e.g., `idiomatic/result`): Uses `(A, error)` tuples (Go-style)
- Context readers use non-idiomatic `Result[A]` internally
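The `Eitherize1` note above can be made concrete with a small example-test sketch in the style of the template (assuming `result.Eitherize1` lifts `func(A) (B, error)` into a `Kleisli[A, B]`, as the note states):

```go
package result_test

import (
	"fmt"
	"strconv"

	F "github.com/IBM/fp-go/v2/function"
	R "github.com/IBM/fp-go/v2/result"
)

// ExampleEitherize1 lifts strconv.Atoi, a plain func(string) (int, error),
// into a Result-returning Kleisli arrow and runs it through a small pipeline.
func ExampleEitherize1() {
	parse := R.Eitherize1(strconv.Atoi)

	out := F.Pipe2(
		parse("42"),
		R.Map(func(x int) int { return x * 2 }),
		R.GetOrElse(F.Constant1[error](0)),
	)

	fmt.Println(out)
	// Output:
	// 84
}
```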

v2/IDIOMATIC_COMPARISON.md

@@ -0,0 +1,816 @@
# Idiomatic vs Standard Package Comparison
> **Latest Update:** 2025-11-18 - Updated with fresh benchmarks after `either` package optimizations
This document provides a comprehensive comparison between the `idiomatic` packages and the standard fp-go packages (`result` and `option`).
**See also:** [BENCHMARK_COMPARISON.md](./BENCHMARK_COMPARISON.md) for detailed performance analysis.
## Table of Contents
1. [Overview](#overview)
2. [Design Differences](#design-differences)
3. [Performance Comparison](#performance-comparison)
4. [API Comparison](#api-comparison)
5. [When to Use Each](#when-to-use-each)
## Overview
The fp-go library provides two approaches to functional programming patterns in Go:
- **Standard Packages** (`result`, `either`, `option`): Use struct wrappers for algebraic data types
- **Idiomatic Packages** (`idiomatic/result`, `idiomatic/option`): Use native Go tuples for the same patterns
### Key Insight
After recent optimizations to the `either` package, both approaches now offer excellent performance:
- **Simple operations** (~1-5 ns/op): Both packages perform comparably
- **Core transformations**: Idiomatic is **1.2-2.3x faster**
- **Complex operations**: Idiomatic is **2-32x faster** with significantly fewer allocations
- **Real-world pipelines**: Idiomatic shows **2-3.4x speedup**
The idiomatic packages provide:
- Consistently better performance across most operations
- Zero allocations for complex operations (ChainFirst: 72 B → 0 B)
- More familiar Go idioms
- Seamless integration with existing Go code
## Design Differences
### Data Representation
#### Standard Result Package
```go
// Uses Either[error, A] which is a struct wrapper
type Result[A any] = Either[error, A]
type Either[E, A any] struct {
r A
l E
isLeft bool
}
// Creating values - ZERO heap allocations (struct returned by value)
success := result.Right[error](42) // Returns Either struct by value (0 B/op)
failure := result.Left[int](err) // Returns Either struct by value (0 B/op)
// Benchmarks confirm:
// BenchmarkRight-16 871258489 1.384 ns/op 0 B/op 0 allocs/op
// BenchmarkLeft-16 683089270 1.761 ns/op 0 B/op 0 allocs/op
```
#### Idiomatic Result Package
```go
// Uses native Go tuples (value, error)
type Kleisli[A, B any] = func(A) (B, error)
type Operator[A, B any] = func(A, error) (B, error)
// Creating values - ZERO allocations (tuples on stack)
success := result.Right(42) // Returns (42, nil) - 0 B/op
failure := result.Left[int](err) // Returns (0, err) - 0 B/op
// Benchmarks confirm:
// BenchmarkRight-16 789879016 1.427 ns/op 0 B/op 0 allocs/op
// BenchmarkLeft-16 895412131 1.349 ns/op 0 B/op 0 allocs/op
```
### Type Signatures
#### Standard Result
```go
// Functions take and return Result[T] structs
func Map[A, B any](f func(A) B) func(Result[A]) Result[B]
func Chain[A, B any](f Kleisli[A, B]) func(Result[A]) Result[B]
func Fold[A, B any](onLeft func(error) B, onRight func(A) B) func(Result[A]) B
// Usage requires wrapping/unwrapping
res := result.Right[error](42)
mapped := result.Map(double)(res)
value, err := result.UnwrapError(mapped)
```
#### Idiomatic Result
```go
// Functions work directly with tuples
func Map[A, B any](f func(A) B) func(A, error) (B, error)
func Chain[A, B any](f Kleisli[A, B]) func(A, error) (B, error)
func Fold[A, B any](onLeft func(error) B, onRight func(A) B) func(A, error) B
// Usage works naturally with Go's error handling
value, err := result.Right(42)
value, err = result.Map(double)(value, err)
// Can use directly: if err != nil { ... }
```
### Memory Layout
#### Standard Result (struct-based)
```
Either[error, int] struct (returned by value):
┌─────────────────────┐
│ r: int (8B) │ Stack allocation: 24 bytes
│ l: error (8B) │ NO heap allocation when returned by value
│ isLeft: bool (1B) │ Benchmarks show 0 B/op, 0 allocs/op
│ padding (7B) │
└─────────────────────┘
Key insight: Go returns small structs (<= ~64 bytes) by value on the stack.
The Either struct (24 bytes) does NOT escape to heap in normal usage.
```
#### Idiomatic Result (tuple-based)
```
(int, error) tuple:
┌─────────────────────┐
│ int: 8 bytes │ Stack allocation: 16 bytes
│ error: 8 bytes │ NO heap allocation
└─────────────────────┘
Both approaches achieve zero heap allocations for constructor operations!
```
### Why Both Have Zero Allocations
Both packages avoid heap allocations for simple operations:
**Standard Either/Result:**
- `Either` struct is small (24 bytes)
- Go returns by value on the stack
- Inlining eliminates function call overhead
- Result: `0 B/op, 0 allocs/op`
**Idiomatic Result:**
- Tuples are native Go multi-value returns
- Always on stack, never heap
- Even simpler than structs
- Result: `0 B/op, 0 allocs/op`
**When Either WOULD escape to heap:**
```go
// Taking address of local Either
func bad1() *Either[error, int] {
e := Right[error](42)
return &e // ESCAPES: pointer to local
}
// Storing in interface
func bad2() interface{} {
return Right[error](42) // ESCAPES: interface boxing
}
// Closure capturing a local value
func bad3() func() Either[error, int] {
e := Right[error](42)
return func() Either[error, int] {
return e // May escape depending on usage
}
}
```
In normal functional composition (Map, Chain, Fold), neither package causes heap allocations for simple operations.
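These allocation claims can be reproduced with Go's standard benchmark tooling; the sketch below is illustrative (the benchmark name and the measured pipeline are placeholders, not part of either package):

```go
package result_test

import (
	"testing"

	"github.com/IBM/fp-go/v2/result"
)

// BenchmarkMapRight measures a single Map over a Right value.
// b.ReportAllocs makes `go test -bench .` print B/op and allocs/op,
// which is how figures like the ones quoted here are obtained.
func BenchmarkMapRight(b *testing.B) {
	double := func(x int) int { return x * 2 }
	mapDouble := result.Map(double)
	in := result.Right[error](21)

	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		_ = mapDouble(in)
	}
}
```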
## Performance Comparison
> **Latest benchmarks:** 2025-11-18 after `either` package optimizations
>
> For detailed analysis, see [BENCHMARK_COMPARISON.md](./BENCHMARK_COMPARISON.md)
### Quick Summary (Either vs Idiomatic)
Both packages now show **excellent performance** after optimizations:
| Category | Either | Idiomatic | Winner | Speedup |
|----------|--------|-----------|--------|---------|
| **Constructors** | 1.4-1.8 ns/op | 1.2-1.4 ns/op | **TIE** | ~1.0-1.3x |
| **Predicates** | 1.5 ns/op | 1.3-1.5 ns/op | **TIE** | ~1.0x |
| **Map Operations** | 4.2-7.2 ns/op | 2.5-4.3 ns/op | **Idiomatic** | 1.2-2.1x |
| **Chain Operations** | 4.4-5.4 ns/op | 2.3-2.5 ns/op | **Idiomatic** | 1.8-2.3x |
| **ChainFirst** | **87.6 ns/op** (72 B) | **2.7 ns/op** (0 B) | **Idiomatic** | **32.4x** ✓✓✓ |
| **BiMap** | 11.5-16.8 ns/op | 3.5-3.8 ns/op | **Idiomatic** | 3.3-4.4x |
| **Alt/OrElse** | 4.0-5.7 ns/op | 2.4 ns/op | **Idiomatic** | 1.6-2.4x |
| **GetOrElse** | 6.3-9.0 ns/op | 1.5-2.1 ns/op | **Idiomatic** | 3.1-6.1x |
| **Pipelines** | 75-280 ns/op | 26-116 ns/op | **Idiomatic** | 2.4-3.4x |
### Constructor Operations
| Operation | Either (ns/op) | Idiomatic (ns/op) | Speedup | Winner |
|-----------|----------------|-------------------|---------|--------|
| Left | 1.76 | **1.35** | 1.3x | Idiomatic ✓ |
| Right | 1.38 | 1.43 | ~1.0x | Tie |
| Of | 1.68 | **1.22** | 1.4x | Idiomatic ✓ |
**Analysis:** After optimizations, both packages have comparable constructor performance.
### Core Transformation Operations
| Operation | Either (ns/op) | Idiomatic (ns/op) | Speedup | Winner |
|------------------|----------------|-------------------|---------|--------|
| Map (Right) | 5.13 | **4.34** | 1.2x | Idiomatic ✓ |
| Map (Left) | 4.19 | **2.48** | 1.7x | Idiomatic ✓ |
| MapLeft (Right) | 3.93 | **2.22** | 1.8x | Idiomatic ✓ |
| MapLeft (Left) | 7.22 | **3.51** | 2.1x | Idiomatic ✓ |
| Chain (Right) | 5.44 | **2.34** | 2.3x | Idiomatic ✓ |
| Chain (Left) | 4.44 | **2.53** | 1.8x | Idiomatic ✓ |
### Complex Operations - The Big Difference
| Operation | Either (ns/op) | Idiomatic (ns/op) | Speedup | Either Allocs | Idio Allocs |
|-----------------------|----------------|-------------------|---------|---------------|-------------|
| **ChainFirst (Right)** | **87.62** | **2.71** | **32.4x** ✓✓✓ | 72 B, 3 allocs | **0 B, 0 allocs** |
| ChainFirst (Left) | 3.94 | 2.48 | 1.6x | 0 B | 0 B |
| BiMap (Right) | 16.79 | **3.82** | 4.4x | 0 B | 0 B |
| BiMap (Left) | 11.47 | **3.47** | 3.3x | 0 B | 0 B |
**Critical Insight:** ChainFirst shows the most dramatic difference - **32x faster** with **zero allocations** in idiomatic.
### Pipeline Benchmarks (Real-World Scenarios)
| Operation | Either (ns/op) | Idiomatic (ns/op) | Speedup | Either Allocs | Idio Allocs |
|-----------|----------------|-------------------|---------|---------------|-------------|
| Pipeline Map (Right) | 112.7 | **46.5** | **2.4x** ✓ | 72 B, 3 allocs | 48 B, 2 allocs |
| Pipeline Chain (Right) | 74.4 | **26.1** | **2.9x** ✓ | 48 B, 2 allocs | 24 B, 1 alloc |
| Pipeline Complex (Right)| 279.8 | **116.3** | **2.4x** ✓ | 192 B, 8 allocs | 120 B, 5 allocs |
**Analysis:** In realistic composition scenarios, idiomatic is consistently 2-3x faster with fewer allocations.
### Extraction Operations
| Operation | Either (ns/op) | Idiomatic (ns/op) | Speedup | Winner |
|-----------|----------------|-------------------|---------|--------|
| GetOrElse (Right) | 9.01 | **1.49** | **6.1x** ✓✓ | Idiomatic |
| GetOrElse (Left) | 6.35 | **2.08** | **3.1x** ✓✓ | Idiomatic |
| Alt (Right) | 5.72 | **2.40** | **2.4x** ✓ | Idiomatic |
| Alt (Left) | 4.89 | **2.39** | **2.0x** ✓ | Idiomatic |
| Fold (Right) | 4.03 | **2.75** | **1.5x** ✓ | Idiomatic |
| Fold (Left) | 3.69 | **2.40** | **1.5x** ✓ | Idiomatic |
**Analysis:** Idiomatic shows significant advantages (1.5-6x) for value extraction operations.
### Key Findings After Optimizations
1. **Both packages are now fast** - Simple operations are in the 1-5 ns/op range for both
2. **Idiomatic leads in most operations** - 1.2-2.3x faster for common transformations
3. **ChainFirst is the standout** - 32x faster with zero allocations in idiomatic
4. **Pipelines favor idiomatic** - 2-3.4x faster in realistic composition scenarios
5. **Memory efficiency** - Idiomatic consistently uses fewer allocations
### Performance Summary
**Idiomatic Advantages:**
- **Core operations**: 1.2-2.3x faster for Map, Chain, Fold
- **Complex operations**: 3-32x faster with zero allocations
- **Pipelines**: 2-3.4x faster with significantly fewer allocations
- **Extraction**: 1.5-6x faster for GetOrElse, Alt, Fold
- **Consistency**: Predictable, fast performance across all operations
**Either Advantages:**
- **Comparable performance**: After optimizations, matches idiomatic for simple operations
- **Feature richness**: More operations (Do-notation, Bind, Let, Flatten, Swap)
- **Type flexibility**: Full Either[E, A] with custom error types
- **Zero allocations**: Most simple operations have zero allocations
## API Comparison
### Creating Values
#### Standard Result
```go
import "github.com/IBM/fp-go/v2/result"
// Create success/failure
success := result.Right[error](42)
failure := result.Left[int](errors.New("oops"))
// Type annotation required
var r result.Result[int] = result.Right[error](42)
```
#### Idiomatic Result
```go
import "github.com/IBM/fp-go/v2/idiomatic/result"
// Create success/failure (more concise)
success := result.Right(42) // (42, nil)
failure := result.Left[int](errors.New("oops")) // (0, error)
// Native Go pattern
value, err := result.Right(42)
if err != nil {
// handle error
}
```
### Transforming Values
#### Standard Result
```go
// Map transforms the success value
double := result.Map(N.Mul(2))
doubled := double(result.Right[error](21)) // Right(42)
// Chain sequences operations
validate := result.Chain(func(x int) result.Result[int] {
if x > 0 {
return result.Right[error](x * 2)
}
return result.Left[int](errors.New("negative"))
})
```
#### Idiomatic Result
```go
// Map transforms the success value
double := result.Map(N.Mul(2))
value, err := double(21, nil) // (42, nil)
// Chain sequences operations
validate := result.Chain(func(x int) (int, error) {
if x > 0 {
return x * 2, nil
}
return 0, errors.New("negative")
})
```
### Pattern Matching
#### Standard Result
```go
// Fold extracts the value
output := result.Fold(
func(err error) string { return "Error: " + err.Error() },
func(n int) string { return fmt.Sprintf("Value: %d", n) },
)(myResult)
// GetOrElse with default
value := result.GetOrElse(func(err error) int { return 0 })(myResult)
```
#### Idiomatic Result
```go
// Fold extracts the value (same API, different input)
output := result.Fold(
func(err error) string { return "Error: " + err.Error() },
func(n int) string { return fmt.Sprintf("Value: %d", n) },
)(value, err)
// GetOrElse with default
value := result.GetOrElse(func(err error) int { return 0 })(value, err)
// Or use native Go pattern
if err != nil {
value = 0
}
```
### Integration with Existing Code
#### Standard Result
```go
// Converting from (value, error) to Result
func doSomething() (int, error) {
return 42, nil
}
res := result.TryCatchError(doSomething())
// Converting back to (value, error)
value, err := result.UnwrapError(res)
```
#### Idiomatic Result
```go
// Direct compatibility with (value, error)
func doSomething() (int, error) {
return 42, nil
}
// No conversion needed!
value, err := doSomething()
value, err = result.Map(double)(value, err)
```
### Pipeline Composition
#### Standard Result
```go
import F "github.com/IBM/fp-go/v2/function"
output := F.Pipe3(
result.Right[error](10),
result.Map(double),
result.Chain(validate),
result.Map(format),
)
// Need to unwrap at the end
value, err := result.UnwrapError(output)
```
#### Idiomatic Result
```go
import F "github.com/IBM/fp-go/v2/function"
value, err := F.Pipe3(
result.Right(10),
result.Map(double),
result.Chain(validate),
result.Map(format),
)
// Already in (value, error) form
if err != nil {
// handle error
}
```
## Detailed Design Comparison
### Type System
#### Standard Result
**Strengths:**
- Full algebraic data type semantics
- Explicit Either[E, A] allows custom error types
- Type-safe by construction
- Clear separation of error and success channels
**Weaknesses:**
- Requires wrapper structs (memory overhead)
- Less familiar to Go developers
- Needs conversion functions for Go's standard library
- More verbose type annotations
#### Idiomatic Result
**Strengths:**
- Native Go idioms (value, error) pattern
- Zero wrapper overhead
- Seamless stdlib integration
- Familiar to all Go developers
- Terser syntax
**Weaknesses:**
- Error type fixed to `error`
- Less explicit about Either semantics
- Cannot use custom error types without conversion
- Slightly less type-safe (can accidentally ignore bool/error)
### Monad Laws
Both packages satisfy the monad laws, but enforce them differently:
#### Standard Result
```go
// Left identity: return a >>= f ≡ f a
assert.Equal(
result.Chain(f)(result.Of(a)),
f(a),
)
// Right identity: m >>= return ≡ m
assert.Equal(
result.Chain(result.Of[int])(m),
m,
)
// Associativity: (m >>= f) >>= g ≡ m >>= (\x -> f x >>= g)
assert.Equal(
result.Chain(g)(result.Chain(f)(m)),
result.Chain(func(x int) result.Result[int] {
return result.Chain(g)(f(x))
})(m),
)
```
#### Idiomatic Result
```go
// Same laws, different syntax
// Left identity: Chain(f)(Of(a)) ≡ f(a)
b, berr := result.Chain(f)(result.Of(a))
c, cerr := f(a)
assert.Equal(c, b)
assert.Equal(cerr, berr)
// Right identity: Chain(Of)(value, err) ≡ (value, err)
v, verr := result.Chain(result.Of[int])(value, err)
assert.Equal(value, v)
assert.Equal(err, verr)
// Associativity (same structure, applied to the (value, error) pair)
```
### Error Handling Philosophy
#### Standard Result
```go
// Explicit error handling through types
func processUser(id int) result.Result[User] {
user := fetchUser(id) // Returns Result[User]
return F.Pipe2(
user,
result.Chain(validateUser),
result.Chain(enrichUser),
)
}
// Must explicitly unwrap
user, err := result.UnwrapError(processUser(42))
if err != nil {
log.Error(err)
}
```
#### Idiomatic Result
```go
// Natural Go error handling
func processUser(id int) (User, error) {
user, err := fetchUser(id) // Returns (User, error)
user, err = result.Chain(validateUser)(user, err)
return result.Chain(enrichUser)(user, err)
}
// Already in Go form
user, err := processUser(42)
if err != nil {
log.Error(err)
}
```
### Composition Patterns
#### Standard Result
```go
// Applicative composition
import A "github.com/IBM/fp-go/v2/apply"
type Config struct {
Host string
Port int
DB string
}
config := A.SequenceT3(
result.FromPredicate(validHost, hostError)(host),
result.FromPredicate(validPort, portError)(port),
result.FromPredicate(validDB, dbError)(db),
)(func(h string, p int, d string) Config {
return Config{h, p, d}
})
```
#### Idiomatic Result
```go
// Direct tuple composition
config, err := func() (Config, error) {
host, err := result.FromPredicate(validHost, hostError)(host)
if err != nil {
return Config{}, err
}
port, err := result.FromPredicate(validPort, portError)(port)
if err != nil {
return Config{}, err
}
db, err := result.FromPredicate(validDB, dbError)(db)
if err != nil {
return Config{}, err
}
return Config{host, port, db}, nil
}()
```
## When to Use Each
### Use Idiomatic Result When (Recommended for Most Cases):
1. **Performance Matters**
- Any production service (web servers, APIs, microservices)
- Hot paths and high-throughput scenarios (>1000 req/s)
- Complex operation chains (**32x faster** ChainFirst)
- Real-world pipelines (**2-3x faster**)
- Memory-constrained environments (zero allocations)
- Want **1.2-6x speedup** across most operations
2. **Go Integration** ⭐⭐
- Working with existing Go codebases
- Interfacing with standard library (native (value, error))
- Team familiar with Go, new to FP
- Want zero-cost functional abstractions
- Seamless error handling patterns
3. **Pragmatic Functional Programming**
- Value performance AND functional patterns
- Prefer Go idioms over FP terminology
- Simpler function signatures
- Lower cognitive overhead
- Production-ready patterns
4. **Real-World Applications**
- Web servers, REST APIs, gRPC services
- CLI tools and command-line applications
- Data processing pipelines
- Any latency-sensitive application
- Systems with tight performance budgets
**Performance Gains:** Use idiomatic for 1.2-32x speedup depending on operation, with consistently lower allocations.
### Use Standard Either/Result When:
1. **Type Safety & Flexibility**
- Need explicit Either[E, A] with **custom error types**
- Building domain-specific error hierarchies
- Want to distinguish different error categories at type level
- Type system enforcement is critical
2. **Advanced FP Features**
- Using Do-notation for complex monadic compositions
- Need operations like Flatten, Swap, Bind, Let
- Leveraging advanced type classes (Semigroup, Monoid)
- Want the complete FP toolkit
3. **FP Expertise & Education**
- Porting code from other FP languages (Scala, Haskell)
- Teaching functional programming concepts
- Team has strong FP background
- Explicit algebraic data types preferred
- Code review benefits from FP terminology
4. **Performance is Acceptable**
- After optimizations, Either is **quite fast** (1-5 ns/op for simple operations)
- Difference matters mainly at high scale (millions of operations)
- Code clarity > micro-optimizations
- Simple operations dominate your workload
**Note:** Either package is now performant enough for most use cases. Choose it for features, not performance concerns.
### Hybrid Approach
You can use both packages together:
```go
import (
stdResult "github.com/IBM/fp-go/v2/result"
"github.com/IBM/fp-go/v2/idiomatic/result"
)
// Use standard for complex types
type ValidationError struct {
Field string
Error string
}
func validateInput(input string) stdResult.Either[ValidationError, Input] {
// ... validation logic
}
// Convert to idiomatic for performance
func processInput(input string) (Output, error) {
validated := validateInput(input)
value, err := stdResult.UnwrapError(
stdResult.MapLeft(toError)(validated),
)
// Use idiomatic for hot path
return result.Chain(heavyProcessing)(value, err)
}
```
## Migration Guide
### From Standard to Idiomatic
```go
// Before (standard)
import "github.com/IBM/fp-go/v2/result"
func process(x int) result.Result[int] {
return F.Pipe2(
result.Right[error](x),
result.Map(double),
result.Chain(validate),
)
}
// After (idiomatic)
import "github.com/IBM/fp-go/v2/idiomatic/result"
func process(x int) (int, error) {
// each step consumes and returns the (value, error) pair directly
return result.Chain(validate)(result.Map(double)(result.Right(x)))
}
```
### Key Changes
1. **Type signatures**: `Result[T]` → `(T, error)`
2. **Kleisli**: `func(A) Result[B]` → `func(A) (B, error)`
3. **Operator**: `func(Result[A]) Result[B]` → `func(A, error) (B, error)`
4. **Return values**: Function calls return tuples, not wrapped values
5. **Pattern matching**: Same Fold/GetOrElse API, different inputs
## Conclusion
### Performance Summary (After Either Optimizations)
The latest benchmark results show a clear pattern:
**Both packages are now fast**, but idiomatic consistently leads:
- **Constructors & Predicates**: Both ~1-2 ns/op (essentially tied)
- **Core transformations**: Idiomatic **1.2-2.3x faster** (Map, Chain, Fold)
- **Complex operations**: Idiomatic **3-32x faster** (BiMap, ChainFirst)
- **Pipelines**: Idiomatic **2-3.4x faster** with fewer allocations
- **Extraction**: Idiomatic **1.5-6x faster** (GetOrElse, Alt)
**Key Insight:** The idiomatic package delivers **consistently better performance** across the board while maintaining zero-cost abstractions. The Either package is now fast enough for most use cases, but idiomatic is the performance winner.
### Updated Recommendation Matrix
| Scenario | Recommendation | Reason |
|----------|---------------|--------|
| **New Go project** | **Idiomatic** ⭐ | Natural Go patterns, 1.2-6x faster, better integration |
| **Production services** | **Idiomatic** ⭐⭐ | 2-3x faster pipelines, zero allocations, proven performance |
| **Performance critical** | **Idiomatic** ⭐⭐⭐ | 32x faster complex ops, minimal allocations |
| **Microservices/APIs** | **Idiomatic** ⭐⭐ | High throughput, familiar patterns, better performance |
| **CLI Tools** | **Idiomatic** ⭐ | Low overhead, Go idioms, fast |
| Custom error types | Standard/Either | Need Either[E, A] with domain types |
| Learning FP | Standard/Either | Clearer ADT semantics, educational |
| FP-heavy codebase | Standard/Either | Consistency, Do-notation, full FP toolkit |
| Library/Framework | Either way | Both are good; choose based on API style |
### Real-World Impact
For a service handling 10,000 requests/second (≈864M requests/day), with one complex pipeline operation per request:
```
Either package:    280 ns/op × 864M ops/day ≈ 242 seconds ≈ 4.0 minutes of CPU time
Idiomatic package: 116 ns/op × 864M ops/day ≈ 100 seconds ≈ 1.7 minutes of CPU time
Time saved: ≈ 2.4 minutes of CPU time per day, per pipeline step on the hot path
```
At scale, this translates to:
- Lower latency (2-3x faster response times for FP operations)
- Reduced CPU usage (fewer cores needed)
- Lower memory pressure (significantly fewer allocations)
- Better resource utilization
### Final Recommendation
**For most Go projects:** Use **idiomatic packages**
- 1.2-32x faster across operations
- Native Go idioms
- Zero-cost abstractions
- Production-proven performance
- Easier integration
**For specialized needs:** Use **standard Either/Result**
- Need custom error types Either[E, A]
- Want Do-notation and advanced FP features
- Porting from FP languages
- Educational/learning context
- FP-heavy existing codebase
### Bottom Line
After optimizations, both packages are excellent:
- **Either/Result**: Fast enough for most use cases, feature-rich, type-safe
- **Idiomatic**: **Faster in practice** (1.2-32x), native Go, zero-cost FP
The idiomatic packages now represent the **best of both worlds**: full functional programming capabilities with Go's native performance and idioms. Unless you specifically need Either[E, A]'s custom error types or advanced FP features, **idiomatic is the recommended choice** for production Go services.
Both maintain the core benefits of functional programming—choose based on whether you prioritize performance & Go integration (idiomatic) or type flexibility & FP features (either).


@@ -0,0 +1,174 @@
# Idiomatic ReadIOResult Functions - Implementation Plan
## Overview
This document outlines the idiomatic functions that should be added to the `readerioresult` package to support Go's native `(value, error)` pattern, similar to what was implemented for `readerresult`.
## Key Concepts
The idiomatic package `github.com/IBM/fp-go/v2/idiomatic/readerioresult` defines:
- `ReaderIOResult[R, A]` as `func(R) func() (A, error)` (idiomatic style)
- This contrasts with `readerioresult.ReaderIOResult[R, A]` which is `Reader[R, IOResult[A]]` (functional style); both shapes are expanded in the sketch below
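Here is that sketch (the type names are hypothetical and used only for illustration; the real definitions live in the respective packages):

```go
package sketch

import "github.com/IBM/fp-go/v2/result"

// Idiomatic style: the environment produces a thunk returning Go's native pair.
type IdiomaticReaderIOResult[R, A any] = func(R) func() (A, error)

// Functional style: the same computation, but the inner thunk yields a
// Result[A] (a struct-based Either[error, A]) rather than the (A, error) tuple.
type FunctionalReaderIOResult[R, A any] = func(R) func() result.Result[A]
```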
## Functions to Add
### In `readerioresult/reader.go`
Add helper functions at the top:
```go
func fromReaderIOResultKleisliI[R, A, B any](f RIORI.Kleisli[R, A, B]) Kleisli[R, A, B] {
return function.Flow2(f, FromReaderIOResultI[R, B])
}
func fromIOResultKleisliI[A, B any](f IORI.Kleisli[A, B]) ioresult.Kleisli[A, B] {
return ioresult.Eitherize1(f)
}
```
### Core Conversion Functions
1. **FromResultI** - Lift `(value, error)` to ReaderIOResult
```go
func FromResultI[R, A any](a A, err error) ReaderIOResult[R, A]
```
2. **FromIOResultI** - Lift idiomatic IOResult to functional
```go
func FromIOResultI[R, A any](ioe func() (A, error)) ReaderIOResult[R, A]
```
3. **FromReaderIOResultI** - Convert idiomatic ReaderIOResult to functional
```go
func FromReaderIOResultI[R, A any](rr RIORI.ReaderIOResult[R, A]) ReaderIOResult[R, A]
```
### Chain Functions
4. **MonadChainI** / **ChainI** - Chain with idiomatic Kleisli
```go
func MonadChainI[R, A, B any](ma ReaderIOResult[R, A], f RIORI.Kleisli[R, A, B]) ReaderIOResult[R, B]
func ChainI[R, A, B any](f RIORI.Kleisli[R, A, B]) Operator[R, A, B]
```
5. **MonadChainEitherIK** / **ChainEitherIK** - Chain with idiomatic Result functions (usage sketched after this group)
```go
func MonadChainEitherIK[R, A, B any](ma ReaderIOResult[R, A], f func(A) (B, error)) ReaderIOResult[R, B]
func ChainEitherIK[R, A, B any](f func(A) (B, error)) Operator[R, A, B]
```
6. **MonadChainIOResultIK** / **ChainIOResultIK** - Chain with idiomatic IOResult
```go
func MonadChainIOResultIK[R, A, B any](ma ReaderIOResult[R, A], f func(A) func() (B, error)) ReaderIOResult[R, B]
func ChainIOResultIK[R, A, B any](f func(A) func() (B, error)) Operator[R, A, B]
```
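To illustrate the intent of this group, here is a usage sketch of the proposed `ChainEitherIK`; nothing in it exists yet, and `Config`, `fetchUserID`, and `parseUserID` are placeholders invented for the example:

```go
package sketch

import (
	"strconv"

	F "github.com/IBM/fp-go/v2/function"
	"github.com/IBM/fp-go/v2/readerioresult"
)

// Config is a placeholder environment type for this sketch.
type Config struct{ Base string }

// fetchUserID stands in for an existing ReaderIOResult-producing step,
// obtained elsewhere in the real program.
var fetchUserID readerioresult.ReaderIOResult[Config, string]

// parseUserID shows how the proposed ChainEitherIK would lift strconv.Atoi,
// a plain func(string) (int, error), straight into the pipeline without a
// hand-written Kleisli wrapper.
func parseUserID() readerioresult.ReaderIOResult[Config, int] {
	return F.Pipe1(
		fetchUserID,
		readerioresult.ChainEitherIK[Config](strconv.Atoi),
	)
}
```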
### Applicative Functions
7. **MonadApI** / **ApI** - Apply with idiomatic value
```go
func MonadApI[B, R, A any](fab ReaderIOResult[R, func(A) B], fa RIORI.ReaderIOResult[R, A]) ReaderIOResult[R, B]
func ApI[B, R, A any](fa RIORI.ReaderIOResult[R, A]) Operator[R, func(A) B, B]
```
### Error Handling Functions
8. **OrElseI** - Fallback with idiomatic computation
```go
func OrElseI[R, A any](onLeft RIORI.Kleisli[R, error, A]) Operator[R, A, A]
```
9. **MonadAltI** / **AltI** - Alternative with idiomatic computation
```go
func MonadAltI[R, A any](first ReaderIOResult[R, A], second Lazy[RIORI.ReaderIOResult[R, A]]) ReaderIOResult[R, A]
func AltI[R, A any](second Lazy[RIORI.ReaderIOResult[R, A]]) Operator[R, A, A]
```
### Flatten Functions
10. **FlattenI** - Flatten nested idiomatic ReaderIOResult
```go
func FlattenI[R, A any](mma ReaderIOResult[R, RIORI.ReaderIOResult[R, A]]) ReaderIOResult[R, A]
```
### In `readerioresult/bind.go`
11. **BindI** - Bind with idiomatic Kleisli
```go
func BindI[R, S1, S2, T any](setter func(T) func(S1) S2, f RIORI.Kleisli[R, S1, T]) Operator[R, S1, S2]
```
12. **ApIS** - Apply idiomatic value to state
```go
func ApIS[R, S1, S2, T any](setter func(T) func(S1) S2, fa RIORI.ReaderIOResult[R, T]) Operator[R, S1, S2]
```
13. **ApISL** - Apply idiomatic value using lens
```go
func ApISL[R, S, T any](lens L.Lens[S, T], fa RIORI.ReaderIOResult[R, T]) Operator[R, S, S]
```
14. **BindIL** - Bind idiomatic with lens
```go
func BindIL[R, S, T any](lens L.Lens[S, T], f RIORI.Kleisli[R, T, T]) Operator[R, S, S]
```
15. **BindEitherIK** / **BindResultIK** - Bind idiomatic Result
```go
func BindEitherIK[R, S1, S2, T any](setter func(T) func(S1) S2, f func(S1) (T, error)) Operator[R, S1, S2]
func BindResultIK[R, S1, S2, T any](setter func(T) func(S1) S2, f func(S1) (T, error)) Operator[R, S1, S2]
```
16. **BindIOResultIK** - Bind idiomatic IOResult
```go
func BindIOResultIK[R, S1, S2, T any](setter func(T) func(S1) S2, f func(S1) func() (T, error)) Operator[R, S1, S2]
```
17. **BindToEitherI** / **BindToResultI** - Initialize from idiomatic pair
```go
func BindToEitherI[R, S1, T any](setter func(T) S1) func(T, error) ReaderIOResult[R, S1]
func BindToResultI[R, S1, T any](setter func(T) S1) func(T, error) ReaderIOResult[R, S1]
```
18. **BindToIOResultI** - Initialize from idiomatic IOResult
```go
func BindToIOResultI[R, S1, T any](setter func(T) S1) func(func() (T, error)) ReaderIOResult[R, S1]
```
19. **ApEitherIS** / **ApResultIS** - Apply idiomatic pair to state
```go
func ApEitherIS[R, S1, S2, T any](setter func(T) func(S1) S2) func(T, error) Operator[R, S1, S2]
func ApResultIS[R, S1, S2, T any](setter func(T) func(S1) S2) func(T, error) Operator[R, S1, S2]
```
20. **ApIOResultIS** - Apply idiomatic IOResult to state
```go
func ApIOResultIS[R, S1, S2, T any](setter func(T) func(S1) S2, fa func() (T, error)) Operator[R, S1, S2]
```
## Testing Strategy
Create `readerioresult/idiomatic_test.go` with:
- Tests for each idiomatic function
- Success and error cases
- Integration tests showing real-world usage patterns
- Parallel execution tests where applicable
- Complex scenarios combining multiple idiomatic functions
## Implementation Priority
1. **High Priority** - Core conversion and chain functions (1-6)
2. **Medium Priority** - Bind functions for do-notation (11-16)
3. **Low Priority** - Advanced applicative and error handling (7-10, 17-20)
## Benefits
1. **Seamless Integration** - Mix Go idiomatic code with functional pipelines
2. **Gradual Adoption** - Convert code incrementally from idiomatic to functional
3. **Interoperability** - Work with existing Go libraries that return `(value, error)`
4. **Consistency** - Mirrors the successful pattern from `readerresult`
## References
- See `readerresult` package for similar implementations
- See `idiomatic/readerresult` for the idiomatic types
- See `idiomatic/ioresult` for IO-level idiomatic patterns


@@ -2,25 +2,155 @@
[![Go Reference](https://pkg.go.dev/badge/github.com/IBM/fp-go/v2.svg)](https://pkg.go.dev/github.com/IBM/fp-go/v2)
[![Coverage Status](https://coveralls.io/repos/github/IBM/fp-go/badge.svg?branch=main&flag=v2)](https://coveralls.io/github/IBM/fp-go?branch=main)
[![Go Report Card](https://goreportcard.com/badge/github.com/IBM/fp-go/v2)](https://goreportcard.com/report/github.com/IBM/fp-go/v2)
Version 2 of fp-go leverages [generic type aliases](https://github.com/golang/go/issues/46477) introduced in Go 1.24, providing a more ergonomic and streamlined API.
**fp-go** is a comprehensive functional programming library for Go, bringing type-safe functional patterns inspired by [fp-ts](https://gcanti.github.io/fp-ts/) to the Go ecosystem. Version 2 leverages [generic type aliases](https://github.com/golang/go/issues/46477) introduced in Go 1.24, providing a more ergonomic and streamlined API.
## 📚 Table of Contents
- [Overview](#-overview)
- [Features](#-features)
- [Requirements](#-requirements)
- [Breaking Changes](#-breaking-changes)
- [Installation](#-installation)
- [Quick Start](#-quick-start)
- [Breaking Changes](#️-breaking-changes)
- [Key Improvements](#-key-improvements)
- [Migration Guide](#-migration-guide)
- [Installation](#-installation)
- [What's New](#-whats-new)
- [Documentation](#-documentation)
- [Contributing](#-contributing)
- [License](#-license)
## 🎯 Overview
fp-go brings the power of functional programming to Go with:
- **Type-safe abstractions** - Monads, Functors, Applicatives, and more
- **Composable operations** - Build complex logic from simple, reusable functions
- **Error handling** - Elegant error management with `Either`, `Result`, and `IOEither`
- **Lazy evaluation** - Control when and how computations execute
- **Optics** - Powerful lens, prism, and traversal operations for immutable data manipulation
## ✨ Features
- 🔒 **Type Safety** - Leverage Go's generics for compile-time guarantees
- 🧩 **Composability** - Chain operations naturally with functional composition
- 📦 **Rich Type System** - `Option`, `Either`, `Result`, `IO`, `Reader`, and more
- 🎯 **Practical** - Designed for real-world Go applications
- 🚀 **Performance** - Zero-cost abstractions where possible
- 📖 **Well-documented** - Comprehensive API documentation and examples
- 🧪 **Battle-tested** - Extensive test coverage
## 🔧 Requirements
- **Go 1.24 or later** (for generic type alias support)
## 📦 Installation
```bash
go get github.com/IBM/fp-go/v2
```
## 🚀 Quick Start
### Working with Option
```go
package main
import (
"fmt"
"github.com/IBM/fp-go/v2/option"
N "github.com/IBM/fp-go/v2/number"
)
func main() {
// Create an Option
some := option.Some(42)
none := option.None[int]()
// Map over values
doubled := option.Map(N.Mul(2))(some)
fmt.Println(option.GetOrElse(func() int { return 0 })(doubled)) // Output: 84
// Chain operations
result := option.Chain(func(x int) option.Option[string] {
if x > 0 {
return option.Some(fmt.Sprintf("Positive: %d", x))
}
return option.None[string]()
})(some)
fmt.Println(option.GetOrElse(func() string { return "No value" })(result)) // Output: Positive: 42
}
```
### Error Handling with Result
```go
package main
import (
"errors"
"fmt"
"github.com/IBM/fp-go/v2/result"
)
func divide(a, b int) result.Result[int] {
if b == 0 {
return result.Error[int](errors.New("division by zero"))
}
return result.Ok(a / b)
}
func main() {
res := divide(10, 2)
// Pattern match on the result
msg := result.Fold(
func(err error) string { return "Error: " + err.Error() },
func(val int) string { return fmt.Sprintf("Result: %d", val) },
)(res)
fmt.Println(msg)
// Output: Result: 5
// Or use GetOrElse for a default value
value := result.GetOrElse(func(err error) int { return 0 })(divide(10, 0))
fmt.Println("Value:", value) // Output: Value: 0
}
```
### Composing IO Operations
```go
package main
import (
"fmt"
"github.com/IBM/fp-go/v2/io"
)
func main() {
// Define pure IO operations
readInput := io.MakeIO(func() string {
return "Hello, fp-go!"
})
// Transform the result
uppercase := io.Map(func(s string) string {
return fmt.Sprintf(">>> %s <<<", s)
})(readInput)
// Execute the IO operation
result := uppercase()
fmt.Println(result) // Output: >>> Hello, fp-go! <<<
}
```
## ⚠️ Breaking Changes
### 1. Generic Type Aliases
### From V1 to V2
#### 1. Generic Type Aliases
V2 uses [generic type aliases](https://github.com/golang/go/issues/46477) which require Go 1.24+. This is the most significant change and enables cleaner type definitions.
@@ -34,7 +164,7 @@ type ReaderIOEither[R, E, A any] RD.Reader[R, IOE.IOEither[E, A]]
type ReaderIOEither[R, E, A any] = RD.Reader[R, IOE.IOEither[E, A]]
```
### 2. Generic Type Parameter Ordering
#### 2. Generic Type Parameter Ordering
Type parameters that **cannot** be inferred from function arguments now come first, improving type inference.
@@ -52,7 +182,7 @@ func Ap[B, R, E, A any](fa ReaderIOEither[R, E, A]) func(ReaderIOEither[R, E, fu
This change allows the Go compiler to infer more types automatically, reducing the need for explicit type parameters.
### 3. Pair Monad Semantics
#### 3. Pair Monad Semantics
Monadic operations for `Pair` now operate on the **second argument** to align with the [Haskell definition](https://hackage.haskell.org/package/TypeCompose-0.9.14/docs/Data-Pair.html).
@@ -60,7 +190,7 @@ Monadic operations for `Pair` now operate on the **second argument** to align wi
```go
// Operations on first element
pair := MakePair(1, "hello")
result := Map(func(x int) int { return x * 2 })(pair) // Pair(2, "hello")
result := Map(N.Mul(2))(pair) // Pair(2, "hello")
```
**V2:**
@@ -70,6 +200,36 @@ pair := MakePair(1, "hello")
result := Map(func(s string) string { return s + "!" })(pair) // Pair(1, "hello!")
```
#### 4. Endomorphism Compose Semantics
The `Compose` function for endomorphisms now follows **mathematical function composition** (right-to-left execution), aligning with standard functional programming conventions.
**V1:**
```go
// Compose executed left-to-right
double := N.Mul(2)
increment := N.Add(1)
composed := Compose(double, increment)
result := composed(5) // (5 * 2) + 1 = 11
```
**V2:**
```go
// Compose executes RIGHT-TO-LEFT (mathematical composition)
double := N.Mul(2)
increment := N.Add(1)
composed := Compose(double, increment)
result := composed(5) // (5 + 1) * 2 = 12
// Use MonadChain for LEFT-TO-RIGHT execution
chained := MonadChain(double, increment)
result2 := chained(5) // (5 * 2) + 1 = 11
```
**Key Difference:**
- `Compose(f, g)` now means `f ∘ g`, which applies `g` first, then `f` (right-to-left)
- `MonadChain(f, g)` applies `f` first, then `g` (left-to-right)
## ✨ Key Improvements
### 1. Simplified Type Declarations
@@ -91,16 +251,16 @@ func processData(input string) ET.Either[error, OPT.Option[int]] {
**V2 Approach:**
```go
import (
"github.com/IBM/fp-go/v2/either"
"github.com/IBM/fp-go/v2/result"
"github.com/IBM/fp-go/v2/option"
)
// Define type aliases once
type Either[A any] = either.Either[error, A]
type Result[A any] = result.Result[A]
type Option[A any] = option.Option[A]
// Use them throughout your codebase
func processData(input string) Either[Option[int]] {
func processData(input string) Result[Option[int]] {
// implementation
}
```
@@ -211,7 +371,7 @@ If you're using `Pair`, update operations to work on the second element:
```go
pair := MakePair(42, "data")
// Map operates on first element
result := Map(func(x int) int { return x * 2 })(pair)
result := Map(N.Mul(2))(pair)
```
**After (V2):**
@@ -230,20 +390,14 @@ Create project-wide type aliases for common patterns:
package myapp
import (
"github.com/IBM/fp-go/v2/either"
"github.com/IBM/fp-go/v2/result"
"github.com/IBM/fp-go/v2/option"
"github.com/IBM/fp-go/v2/ioeither"
"github.com/IBM/fp-go/v2/ioresult"
)
type Either[A any] = either.Either[error, A]
type Result[A any] = result.Result[A]
type Option[A any] = option.Option[A]
type IOEither[A any] = ioeither.IOEither[error, A]
```
## 📦 Installation
```bash
go get github.com/IBM/fp-go/v2
type IOResult[A any] = ioresult.IOResult[A]
```
## 🆕 What's New
@@ -277,25 +431,47 @@ func process() IOET.IOEither[error, string] {
**V2 Simplified Example:**
```go
import (
"github.com/IBM/fp-go/v2/either"
"github.com/IBM/fp-go/v2/ioeither"
"strconv"
"github.com/IBM/fp-go/v2/ioresult"
)
type IOEither[A any] = ioeither.IOEither[error, A]
type IOResult[A any] = ioresult.IOResult[A]
func process() IOEither[string] {
return ioeither.Map(
func process() IOResult[string] {
return ioresult.Map(
strconv.Itoa,
)(fetchData())
}
```
## 📚 Additional Resources
## 📚 Documentation
- [Main README](../README.md) - Core concepts and design philosophy
- [API Documentation](https://pkg.go.dev/github.com/IBM/fp-go/v2)
- [Code Samples](../samples/)
- [Go 1.24 Release Notes](https://tip.golang.org/doc/go1.24)
- **[API Documentation](https://pkg.go.dev/github.com/IBM/fp-go/v2)** - Complete API reference
- **[Code Samples](./samples/)** - Practical examples and use cases
- **[Go 1.24 Release Notes](https://tip.golang.org/doc/go1.24)** - Information about generic type aliases
### Core Modules
#### Standard Packages (Struct-based)
- **Option** - Represent optional values without nil
- **Either** - Type-safe error handling with left/right values
- **Result** - Simplified Either with error as left type (recommended for error handling)
- **IO** - Lazy evaluation and side effect management
- **IOResult** - Combine IO with Result for error handling (recommended over IOEither)
- **Reader** - Dependency injection pattern
- **ReaderIOResult** - Combine Reader, IO, and Result for complex workflows
- **Array** - Functional array operations
- **Record** - Functional record/map operations
- **Optics** - Lens, Prism, Optional, and Traversal for immutable updates
#### Idiomatic Packages (Tuple-based, High Performance)
- **idiomatic/option** - Option monad using native Go `(value, bool)` tuples
- **idiomatic/result** - Result monad using native Go `(value, error)` tuples
- **idiomatic/ioresult** - IOResult monad using `func() (value, error)` for IO operations
- **idiomatic/readerresult** - Reader monad combined with Result pattern
- **idiomatic/readerioresult** - Reader monad combined with IOResult pattern
The idiomatic packages offer 2-10x performance improvements and zero allocations by using Go's native tuple patterns instead of struct wrappers. Use them for performance-critical code or when you prefer Go's native error handling style.
## 🤔 Should I Migrate?
@@ -310,10 +486,25 @@ func process() IOEither[string] {
- ⚠️ Migration effort outweighs benefits for your project
- ⚠️ You need stability in production (V2 is newer)
## 🤝 Contributing
Contributions are welcome! Here's how you can help:
1. **Report bugs** - Open an issue with a clear description and reproduction steps
2. **Suggest features** - Share your ideas for improvements
3. **Submit PRs** - Fix bugs or add features (please discuss major changes first)
4. **Improve docs** - Help make the documentation clearer and more comprehensive
Please read our contribution guidelines before submitting pull requests.
## 🐛 Issues and Feedback
Found a bug or have a suggestion? Please [open an issue](https://github.com/IBM/fp-go/issues) on GitHub.
## 📄 License
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
This project is licensed under the Apache License 2.0. See the [LICENSE](https://github.com/IBM/fp-go/blob/main/LICENSE) file for details.
---
**Made with ❤️ by IBM**


@@ -17,11 +17,10 @@ package array
import (
G "github.com/IBM/fp-go/v2/array/generic"
EM "github.com/IBM/fp-go/v2/endomorphism"
F "github.com/IBM/fp-go/v2/function"
"github.com/IBM/fp-go/v2/internal/array"
M "github.com/IBM/fp-go/v2/monoid"
O "github.com/IBM/fp-go/v2/option"
"github.com/IBM/fp-go/v2/option"
"github.com/IBM/fp-go/v2/tuple"
)
@@ -50,16 +49,16 @@ func Replicate[A any](n int, a A) []A {
// This is the monadic version of Map that takes the array as the first parameter.
//
//go:inline
func MonadMap[A, B any](as []A, f func(a A) B) []B {
func MonadMap[A, B any](as []A, f func(A) B) []B {
return G.MonadMap[[]A, []B](as, f)
}
// MonadMapRef applies a function to a pointer to each element of an array, returning a new array with the results.
// This is useful when you need to access elements by reference without copying.
func MonadMapRef[A, B any](as []A, f func(a *A) B) []B {
func MonadMapRef[A, B any](as []A, f func(*A) B) []B {
count := len(as)
bs := make([]B, count)
for i := count - 1; i >= 0; i-- {
for i := range count {
bs[i] = f(&as[i])
}
return bs
@@ -68,7 +67,7 @@ func MonadMapRef[A, B any](as []A, f func(a *A) B) []B {
// MapWithIndex applies a function to each element and its index in an array, returning a new array with the results.
//
//go:inline
func MapWithIndex[A, B any](f func(int, A) B) func([]A) []B {
func MapWithIndex[A, B any](f func(int, A) B) Operator[A, B] {
return G.MapWithIndex[[]A, []B](f)
}
@@ -77,39 +76,39 @@ func MapWithIndex[A, B any](f func(int, A) B) func([]A) []B {
//
// Example:
//
// double := array.Map(func(x int) int { return x * 2 })
// double := array.Map(N.Mul(2))
// result := double([]int{1, 2, 3}) // [2, 4, 6]
//
//go:inline
func Map[A, B any](f func(a A) B) func([]A) []B {
func Map[A, B any](f func(A) B) Operator[A, B] {
return G.Map[[]A, []B](f)
}
// MapRef applies a function to a pointer to each element of an array, returning a new array with the results.
// This is the curried version that returns a function.
func MapRef[A, B any](f func(a *A) B) func([]A) []B {
func MapRef[A, B any](f func(*A) B) Operator[A, B] {
return F.Bind2nd(MonadMapRef[A, B], f)
}
func filterRef[A any](fa []A, pred func(a *A) bool) []A {
var result []A
func filterRef[A any](fa []A, pred func(*A) bool) []A {
count := len(fa)
for i := 0; i < count; i++ {
a := fa[i]
if pred(&a) {
result = append(result, a)
var result []A = make([]A, 0, count)
for i := range count {
a := &fa[i]
if pred(a) {
result = append(result, *a)
}
}
return result
}
func filterMapRef[A, B any](fa []A, pred func(a *A) bool, f func(a *A) B) []B {
var result []B
func filterMapRef[A, B any](fa []A, pred func(*A) bool, f func(*A) B) []B {
count := len(fa)
for i := 0; i < count; i++ {
a := fa[i]
if pred(&a) {
result = append(result, f(&a))
var result []B = make([]B, 0, count)
for i := range count {
a := &fa[i]
if pred(a) {
result = append(result, f(a))
}
}
return result
@@ -118,19 +117,19 @@ func filterMapRef[A, B any](fa []A, pred func(a *A) bool, f func(a *A) B) []B {
// Filter returns a new array with all elements from the original array that match a predicate
//
//go:inline
func Filter[A any](pred func(A) bool) EM.Endomorphism[[]A] {
func Filter[A any](pred func(A) bool) Operator[A, A] {
return G.Filter[[]A](pred)
}
// FilterWithIndex returns a new array with all elements from the original array that match a predicate
//
//go:inline
func FilterWithIndex[A any](pred func(int, A) bool) EM.Endomorphism[[]A] {
func FilterWithIndex[A any](pred func(int, A) bool) Operator[A, A] {
return G.FilterWithIndex[[]A](pred)
}
// FilterRef returns a new array with all elements from the original array that match a predicate operating on pointers.
func FilterRef[A any](pred func(*A) bool) EM.Endomorphism[[]A] {
func FilterRef[A any](pred func(*A) bool) Operator[A, A] {
return F.Bind2nd(filterRef[A], pred)
}
@@ -138,7 +137,7 @@ func FilterRef[A any](pred func(*A) bool) EM.Endomorphism[[]A] {
// This is the monadic version that takes the array as the first parameter.
//
//go:inline
func MonadFilterMap[A, B any](fa []A, f func(A) O.Option[B]) []B {
func MonadFilterMap[A, B any](fa []A, f option.Kleisli[A, B]) []B {
return G.MonadFilterMap[[]A, []B](fa, f)
}
@@ -146,33 +145,33 @@ func MonadFilterMap[A, B any](fa []A, f func(A) O.Option[B]) []B {
// keeping only the Some values. This is the monadic version that takes the array as the first parameter.
//
//go:inline
func MonadFilterMapWithIndex[A, B any](fa []A, f func(int, A) O.Option[B]) []B {
func MonadFilterMapWithIndex[A, B any](fa []A, f func(int, A) Option[B]) []B {
return G.MonadFilterMapWithIndex[[]A, []B](fa, f)
}
// FilterMap maps an array with an iterating function that returns an [O.Option] and it keeps only the Some values discarding the Nones.
// FilterMap maps an array with an iterating function that returns an [Option] and it keeps only the Some values discarding the Nones.
//
//go:inline
func FilterMap[A, B any](f func(A) O.Option[B]) func([]A) []B {
func FilterMap[A, B any](f option.Kleisli[A, B]) Operator[A, B] {
return G.FilterMap[[]A, []B](f)
}
// FilterMapWithIndex maps an array with an iterating function that returns an [O.Option] and it keeps only the Some values discarding the Nones.
// FilterMapWithIndex maps an array with an iterating function that returns an [Option] and it keeps only the Some values discarding the Nones.
//
//go:inline
func FilterMapWithIndex[A, B any](f func(int, A) O.Option[B]) func([]A) []B {
func FilterMapWithIndex[A, B any](f func(int, A) Option[B]) Operator[A, B] {
return G.FilterMapWithIndex[[]A, []B](f)
}
// FilterChain maps an array with an iterating function that returns an [O.Option] of an array. It keeps only the Some values discarding the Nones and then flattens the result.
// FilterChain maps an array with an iterating function that returns an [Option] of an array. It keeps only the Some values discarding the Nones and then flattens the result.
//
//go:inline
func FilterChain[A, B any](f func(A) O.Option[[]B]) func([]A) []B {
func FilterChain[A, B any](f option.Kleisli[A, []B]) Operator[A, B] {
return G.FilterChain[[]A](f)
}
// FilterMapRef filters an array using a predicate on pointers and maps the matching elements using a function on pointers.
func FilterMapRef[A, B any](pred func(a *A) bool, f func(a *A) B) func([]A) []B {
func FilterMapRef[A, B any](pred func(a *A) bool, f func(*A) B) Operator[A, B] {
return func(fa []A) []B {
return filterMapRef(fa, pred, f)
}
@@ -180,8 +179,7 @@ func FilterMapRef[A, B any](pred func(a *A) bool, f func(a *A) B) func([]A) []B
func reduceRef[A, B any](fa []A, f func(B, *A) B, initial B) B {
current := initial
count := len(fa)
for i := 0; i < count; i++ {
for i := range len(fa) {
current = f(current, &fa[i])
}
return current
@@ -262,6 +260,8 @@ func Empty[A any]() []A {
}
// Zero returns an empty array of type A (alias for Empty).
//
//go:inline
func Zero[A any]() []A {
return Empty[A]()
}
@@ -277,7 +277,7 @@ func Of[A any](a A) []A {
// This is the monadic version that takes the array as the first parameter (also known as FlatMap).
//
//go:inline
func MonadChain[A, B any](fa []A, f func(a A) []B) []B {
func MonadChain[A, B any](fa []A, f Kleisli[A, B]) []B {
return G.MonadChain(fa, f)
}
@@ -290,7 +290,7 @@ func MonadChain[A, B any](fa []A, f func(a A) []B) []B {
// result := duplicate([]int{1, 2, 3}) // [1, 1, 2, 2, 3, 3]
//
//go:inline
func Chain[A, B any](f func(A) []B) func([]A) []B {
func Chain[A, B any](f Kleisli[A, B]) Operator[A, B] {
return G.Chain[[]A](f)
}
@@ -306,7 +306,7 @@ func MonadAp[B, A any](fab []func(A) B, fa []A) []B {
// This is the curried version.
//
//go:inline
func Ap[B, A any](fa []A) func([]func(A) B) []B {
func Ap[B, A any](fa []A) Operator[func(A) B, B] {
return G.Ap[[]B, []func(A) B](fa)
}
@@ -328,7 +328,7 @@ func MatchLeft[A, B any](onEmpty func() B, onNonEmpty func(A, []A) B) func([]A)
// Returns None if the array is empty.
//
//go:inline
func Tail[A any](as []A) O.Option[[]A] {
func Tail[A any](as []A) Option[[]A] {
return G.Tail(as)
}
@@ -336,7 +336,7 @@ func Tail[A any](as []A) O.Option[[]A] {
// Returns None if the array is empty.
//
//go:inline
func Head[A any](as []A) O.Option[A] {
func Head[A any](as []A) Option[A] {
return G.Head(as)
}
@@ -344,7 +344,7 @@ func Head[A any](as []A) O.Option[A] {
// Returns None if the array is empty.
//
//go:inline
func First[A any](as []A) O.Option[A] {
func First[A any](as []A) Option[A] {
return G.First(as)
}
@@ -352,12 +352,12 @@ func First[A any](as []A) O.Option[A] {
// Returns None if the array is empty.
//
//go:inline
func Last[A any](as []A) O.Option[A] {
func Last[A any](as []A) Option[A] {
return G.Last(as)
}
// PrependAll inserts a separator before each element of an array.
func PrependAll[A any](middle A) EM.Endomorphism[[]A] {
func PrependAll[A any](middle A) Operator[A, A] {
return func(as []A) []A {
count := len(as)
dst := count * 2
@@ -377,7 +377,7 @@ func PrependAll[A any](middle A) EM.Endomorphism[[]A] {
// Example:
//
// result := array.Intersperse(0)([]int{1, 2, 3}) // [1, 0, 2, 0, 3]
func Intersperse[A any](middle A) EM.Endomorphism[[]A] {
func Intersperse[A any](middle A) Operator[A, A] {
prepend := PrependAll(middle)
return func(as []A) []A {
if IsEmpty(as) {
@@ -406,7 +406,7 @@ func Flatten[A any](mma [][]A) []A {
}
// Slice extracts a subarray from index low (inclusive) to high (exclusive).
func Slice[A any](low, high int) func(as []A) []A {
func Slice[A any](low, high int) Operator[A, A] {
return array.Slice[[]A](low, high)
}
@@ -414,7 +414,7 @@ func Slice[A any](low, high int) func(as []A) []A {
// Returns None if the index is out of bounds.
//
//go:inline
func Lookup[A any](idx int) func([]A) O.Option[A] {
func Lookup[A any](idx int) func([]A) Option[A] {
return G.Lookup[[]A](idx)
}
@@ -422,7 +422,7 @@ func Lookup[A any](idx int) func([]A) O.Option[A] {
// If the index is out of bounds, the element is appended.
//
//go:inline
func UpsertAt[A any](a A) EM.Endomorphism[[]A] {
func UpsertAt[A any](a A) Operator[A, A] {
return G.UpsertAt[[]A](a)
}
@@ -468,7 +468,7 @@ func ConstNil[A any]() []A {
// SliceRight extracts a subarray from the specified start index to the end.
//
//go:inline
func SliceRight[A any](start int) EM.Endomorphism[[]A] {
func SliceRight[A any](start int) Operator[A, A] {
return G.SliceRight[[]A](start)
}
@@ -482,7 +482,7 @@ func Copy[A any](b []A) []A {
// Clone creates a deep copy of the array using the provided endomorphism to clone the values
//
//go:inline
func Clone[A any](f func(A) A) func(as []A) []A {
func Clone[A any](f func(A) A) Operator[A, A] {
return G.Clone[[]A](f)
}
@@ -510,8 +510,8 @@ func Fold[A any](m M.Monoid[A]) func([]A) A {
// Push adds an element to the end of an array (alias for Append).
//
//go:inline
func Push[A any](a A) EM.Endomorphism[[]A] {
return G.Push[EM.Endomorphism[[]A]](a)
func Push[A any](a A) Operator[A, A] {
return G.Push[Operator[A, A]](a)
}
// MonadFlap applies a value to an array of functions, producing an array of results.
@@ -526,13 +526,99 @@ func MonadFlap[B, A any](fab []func(A) B, a A) []B {
// This is the curried version.
//
//go:inline
func Flap[B, A any](a A) func([]func(A) B) []B {
func Flap[B, A any](a A) Operator[func(A) B, B] {
return G.Flap[func(A) B, []func(A) B, []B](a)
}
// Prepend adds an element to the beginning of an array, returning a new array.
//
//go:inline
func Prepend[A any](head A) EM.Endomorphism[[]A] {
return G.Prepend[EM.Endomorphism[[]A]](head)
func Prepend[A any](head A) Operator[A, A] {
return G.Prepend[Operator[A, A]](head)
}
// Reverse returns a new slice with elements in reverse order.
// This function creates a new slice containing all elements from the input slice
// in reverse order, without modifying the original slice.
//
// Type Parameters:
// - A: The type of elements in the slice
//
// Parameters:
// - as: The input slice to reverse
//
// Returns:
// - A new slice with elements in reverse order
//
// Behavior:
// - Creates a new slice with the same length as the input
// - Copies elements from the input slice in reverse order
// - Does not modify the original slice
// - Returns an empty slice if the input is empty
// - Returns a single-element slice unchanged if input has one element
//
// Example:
//
// numbers := []int{1, 2, 3, 4, 5}
// reversed := array.Reverse(numbers)
// // reversed: []int{5, 4, 3, 2, 1}
// // numbers: []int{1, 2, 3, 4, 5} (unchanged)
//
// Example with strings:
//
// words := []string{"hello", "world", "foo", "bar"}
// reversed := array.Reverse(words)
// // reversed: []string{"bar", "foo", "world", "hello"}
//
// Example with empty slice:
//
// empty := []int{}
// reversed := array.Reverse(empty)
// // reversed: []int{} (empty slice)
//
// Example with single element:
//
// single := []string{"only"}
// reversed := array.Reverse(single)
// // reversed: []string{"only"}
//
// Use cases:
// - Reversing the order of elements for display or processing
// - Implementing stack-like behavior (LIFO)
// - Processing data in reverse chronological order
// - Reversing transformation pipelines
// - Creating palindrome checks
// - Implementing undo/redo functionality
//
// Example with processing in reverse:
//
// events := []string{"start", "middle", "end"}
// reversed := array.Reverse(events)
// // Process events in reverse order
// for _, event := range reversed {
// fmt.Println(event) // Prints: "end", "middle", "start"
// }
//
// Example with functional composition:
//
// numbers := []int{1, 2, 3, 4, 5}
// result := F.Pipe2(
// numbers,
// array.Map(N.Mul(2)),
// array.Reverse,
// )
// // result: []int{10, 8, 6, 4, 2}
//
// Performance:
// - Time complexity: O(n) where n is the length of the slice
// - Space complexity: O(n) for the new slice
// - Does not allocate if the input slice is empty
//
// Note: This function is immutable - it does not modify the original slice.
// If you need to reverse a slice in-place, consider using a different approach
// or modifying the slice directly.
//
//go:inline
func Reverse[A any](as []A) []A {
return G.Reverse(as)
}
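To illustrate how the signature change from `func([]A) []B` to `Operator[A, B]` reads at call sites, here is a minimal sketch; it assumes the v2 import aliases used elsewhere in this diff (`A` for array, `F` for function, `N` for number):

```go
package main

import (
	"fmt"

	A "github.com/IBM/fp-go/v2/array"
	F "github.com/IBM/fp-go/v2/function"
	N "github.com/IBM/fp-go/v2/number"
)

func main() {
	// Filter, Map and Reverse all line up as plain []int -> []int steps,
	// so they compose directly in a pipeline.
	result := F.Pipe3(
		[]int{1, 2, 3, 4, 5},
		A.Filter(func(n int) bool { return n%2 == 1 }), // [1 3 5]
		A.Map(N.Mul(2)),                                // [2 6 10]
		A.Reverse[int],
	)
	fmt.Println(result) // [10 6 2]
}
```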

View File

@@ -35,7 +35,7 @@ func TestReplicate(t *testing.T) {
func TestMonadMap(t *testing.T) {
src := []int{1, 2, 3}
result := MonadMap(src, func(x int) int { return x * 2 })
result := MonadMap(src, N.Mul(2))
assert.Equal(t, []int{2, 4, 6}, result)
}
@@ -173,8 +173,8 @@ func TestChain(t *testing.T) {
func TestMonadAp(t *testing.T) {
fns := []func(int) int{
func(x int) int { return x * 2 },
func(x int) int { return x + 10 },
N.Mul(2),
N.Add(10),
}
values := []int{1, 2}
result := MonadAp(fns, values)
@@ -268,7 +268,7 @@ func TestCopy(t *testing.T) {
func TestClone(t *testing.T) {
src := []int{1, 2, 3}
cloner := Clone(func(x int) int { return x * 2 })
cloner := Clone(N.Mul(2))
result := cloner(src)
assert.Equal(t, []int{2, 4, 6}, result)
}
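The inline lambdas in these tests are replaced by helpers from the number package. As a small sketch (assuming `N` aliases `github.com/IBM/fp-go/v2/number`), `N.Mul` and `N.Add` are simply curried arithmetic:

```go
double := N.Mul(2) // func(int) int, the point-free form of func(x int) int { return x * 2 }
addTen := N.Add(10)

fmt.Println(double(21)) // 42
fmt.Println(addTen(5))  // 15
```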

View File

@@ -22,6 +22,7 @@ import (
F "github.com/IBM/fp-go/v2/function"
"github.com/IBM/fp-go/v2/internal/utils"
N "github.com/IBM/fp-go/v2/number"
O "github.com/IBM/fp-go/v2/option"
S "github.com/IBM/fp-go/v2/string"
T "github.com/IBM/fp-go/v2/tuple"
@@ -214,3 +215,262 @@ func ExampleFoldMap() {
// Output: ABC
}
// TestReverse tests the Reverse function
func TestReverse(t *testing.T) {
t.Run("Reverse integers", func(t *testing.T) {
input := []int{1, 2, 3, 4, 5}
result := Reverse(input)
expected := []int{5, 4, 3, 2, 1}
assert.Equal(t, expected, result)
})
t.Run("Reverse strings", func(t *testing.T) {
input := []string{"hello", "world", "foo", "bar"}
result := Reverse(input)
expected := []string{"bar", "foo", "world", "hello"}
assert.Equal(t, expected, result)
})
t.Run("Reverse empty slice", func(t *testing.T) {
input := []int{}
result := Reverse(input)
assert.Equal(t, []int{}, result)
})
t.Run("Reverse single element", func(t *testing.T) {
input := []string{"only"}
result := Reverse(input)
assert.Equal(t, []string{"only"}, result)
})
t.Run("Reverse two elements", func(t *testing.T) {
input := []int{1, 2}
result := Reverse(input)
assert.Equal(t, []int{2, 1}, result)
})
t.Run("Does not modify original slice", func(t *testing.T) {
original := []int{1, 2, 3, 4, 5}
originalCopy := []int{1, 2, 3, 4, 5}
_ = Reverse(original)
assert.Equal(t, originalCopy, original)
})
t.Run("Reverse with floats", func(t *testing.T) {
input := []float64{1.1, 2.2, 3.3}
result := Reverse(input)
expected := []float64{3.3, 2.2, 1.1}
assert.Equal(t, expected, result)
})
t.Run("Reverse with structs", func(t *testing.T) {
type Person struct {
Name string
Age int
}
input := []Person{
{"Alice", 30},
{"Bob", 25},
{"Charlie", 35},
}
result := Reverse(input)
expected := []Person{
{"Charlie", 35},
{"Bob", 25},
{"Alice", 30},
}
assert.Equal(t, expected, result)
})
t.Run("Reverse with pointers", func(t *testing.T) {
a, b, c := 1, 2, 3
input := []*int{&a, &b, &c}
result := Reverse(input)
assert.Equal(t, []*int{&c, &b, &a}, result)
})
t.Run("Double reverse returns original order", func(t *testing.T) {
original := []int{1, 2, 3, 4, 5}
reversed := Reverse(original)
doubleReversed := Reverse(reversed)
assert.Equal(t, original, doubleReversed)
})
t.Run("Reverse with large slice", func(t *testing.T) {
input := MakeBy(1000, F.Identity[int])
result := Reverse(input)
// Check first and last elements
assert.Equal(t, 999, result[0])
assert.Equal(t, 0, result[999])
// Check length
assert.Equal(t, 1000, len(result))
})
t.Run("Reverse palindrome", func(t *testing.T) {
input := []int{1, 2, 3, 2, 1}
result := Reverse(input)
assert.Equal(t, input, result)
})
}
// TestReverseComposition tests Reverse with other array operations
func TestReverseComposition(t *testing.T) {
t.Run("Reverse after Map", func(t *testing.T) {
input := []int{1, 2, 3, 4, 5}
result := F.Pipe2(
input,
Map(N.Mul(2)),
Reverse[int],
)
expected := []int{10, 8, 6, 4, 2}
assert.Equal(t, expected, result)
})
t.Run("Map after Reverse", func(t *testing.T) {
input := []int{1, 2, 3, 4, 5}
result := F.Pipe2(
input,
Reverse[int],
Map(N.Mul(2)),
)
expected := []int{10, 8, 6, 4, 2}
assert.Equal(t, expected, result)
})
t.Run("Reverse with Filter", func(t *testing.T) {
input := []int{1, 2, 3, 4, 5, 6}
result := F.Pipe2(
input,
Filter(func(n int) bool { return n%2 == 0 }),
Reverse[int],
)
expected := []int{6, 4, 2}
assert.Equal(t, expected, result)
})
t.Run("Reverse with Reduce", func(t *testing.T) {
input := []string{"a", "b", "c"}
reversed := Reverse(input)
result := Reduce(func(acc, val string) string {
return acc + val
}, "")(reversed)
assert.Equal(t, "cba", result)
})
t.Run("Reverse with Flatten", func(t *testing.T) {
input := [][]int{{1, 2}, {3, 4}, {5, 6}}
result := F.Pipe2(
input,
Reverse[[]int],
Flatten[int],
)
expected := []int{5, 6, 3, 4, 1, 2}
assert.Equal(t, expected, result)
})
}
// TestReverseUseCases demonstrates practical use cases for Reverse
func TestReverseUseCases(t *testing.T) {
t.Run("Process events in reverse chronological order", func(t *testing.T) {
events := []string{"2024-01-01", "2024-01-02", "2024-01-03"}
reversed := Reverse(events)
// Most recent first
assert.Equal(t, "2024-01-03", reversed[0])
assert.Equal(t, "2024-01-01", reversed[2])
})
t.Run("Implement stack behavior (LIFO)", func(t *testing.T) {
stack := []int{1, 2, 3, 4, 5}
reversed := Reverse(stack)
// Pop from reversed (LIFO)
assert.Equal(t, 5, reversed[0])
assert.Equal(t, 4, reversed[1])
})
t.Run("Reverse string characters", func(t *testing.T) {
chars := []rune("hello")
reversed := Reverse(chars)
result := string(reversed)
assert.Equal(t, "olleh", result)
})
t.Run("Check palindrome", func(t *testing.T) {
word := []rune("racecar")
reversed := Reverse(word)
assert.Equal(t, word, reversed)
notPalindrome := []rune("hello")
reversedNot := Reverse(notPalindrome)
assert.NotEqual(t, notPalindrome, reversedNot)
})
t.Run("Reverse transformation pipeline", func(t *testing.T) {
// Apply transformations in reverse order
numbers := []int{1, 2, 3}
// Normal: add 10, then multiply by 2
normal := F.Pipe2(
numbers,
Map(N.Add(10)),
Map(N.Mul(2)),
)
// Reversed order of operations
reversed := F.Pipe2(
numbers,
Map(N.Mul(2)),
Map(N.Add(10)),
)
assert.NotEqual(t, normal, reversed)
assert.Equal(t, []int{22, 24, 26}, normal)
assert.Equal(t, []int{12, 14, 16}, reversed)
})
}
// TestReverseProperties tests mathematical properties of Reverse
func TestReverseProperties(t *testing.T) {
t.Run("Involution property: Reverse(Reverse(x)) == x", func(t *testing.T) {
testCases := [][]int{
{1, 2, 3, 4, 5},
{1},
{},
{1, 2},
{5, 4, 3, 2, 1},
}
for _, original := range testCases {
result := Reverse(Reverse(original))
assert.Equal(t, original, result)
}
})
t.Run("Length preservation: len(Reverse(x)) == len(x)", func(t *testing.T) {
testCases := [][]int{
{1, 2, 3, 4, 5},
{1},
{},
MakeBy(100, F.Identity[int]),
}
for _, input := range testCases {
result := Reverse(input)
assert.Equal(t, len(input), len(result))
}
})
t.Run("First element becomes last", func(t *testing.T) {
input := []int{1, 2, 3, 4, 5}
result := Reverse(input)
if len(input) > 0 {
assert.Equal(t, input[0], result[len(result)-1])
assert.Equal(t, input[len(input)-1], result[0])
}
})
}

View File

@@ -56,8 +56,8 @@ func Do[S any](
//go:inline
func Bind[S1, S2, T any](
setter func(T) func(S1) S2,
f func(S1) []T,
) func([]S1) []S2 {
f Kleisli[S1, T],
) Operator[S1, S2] {
return G.Bind[[]S1, []S2](setter, f)
}
@@ -79,7 +79,7 @@ func Bind[S1, S2, T any](
func Let[S1, S2, T any](
setter func(T) func(S1) S2,
f func(S1) T,
) func([]S1) []S2 {
) Operator[S1, S2] {
return G.Let[[]S1, []S2](setter, f)
}
@@ -101,7 +101,7 @@ func Let[S1, S2, T any](
func LetTo[S1, S2, T any](
setter func(T) func(S1) S2,
b T,
) func([]S1) []S2 {
) Operator[S1, S2] {
return G.LetTo[[]S1, []S2](setter, b)
}
@@ -120,7 +120,7 @@ func LetTo[S1, S2, T any](
//go:inline
func BindTo[S1, T any](
setter func(T) S1,
) func([]T) []S1 {
) Operator[T, S1] {
return G.BindTo[[]S1, []T](setter)
}
@@ -143,6 +143,6 @@ func BindTo[S1, T any](
func ApS[S1, S2, T any](
setter func(T) func(S1) S2,
fa []T,
) func([]S1) []S2 {
) Operator[S1, S2] {
return G.ApS[[]S1, []S2](setter, fa)
}
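For context, a rough sketch of how these do-notation helpers are used over arrays. The full signature of Do is not visible in this hunk, so this assumes Do(seed) wraps the seed value in a one-element slice; `A` and `F` are the import aliases used elsewhere in this diff:

```go
type pair struct {
	X int
	Y int
}

setX := func(x int) func(pair) pair { return func(p pair) pair { p.X = x; return p } }
setY := func(y int) func(pair) pair { return func(p pair) pair { p.Y = y; return p } }

pairs := F.Pipe2(
	A.Do(pair{}), // assumed to produce a singleton slice holding the seed
	A.Bind(setX, func(pair) []int { return []int{1, 2} }),
	A.Bind(setY, func(pair) []int { return []int{10, 20} }),
)
// pairs: [{1 10} {1 20} {2 10} {2 20}], i.e. every combination of both generators
```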

View File

@@ -36,7 +36,7 @@
// generated := array.MakeBy(5, func(i int) int { return i * 2 })
//
// // Transforming arrays
// doubled := array.Map(func(x int) int { return x * 2 })(arr)
// doubled := array.Map(N.Mul(2))(arr)
// filtered := array.Filter(func(x int) bool { return x > 2 })(arr)
//
// // Combining arrays
@@ -50,7 +50,7 @@
// numbers := []int{1, 2, 3, 4, 5}
//
// // Map transforms each element
// doubled := array.Map(func(x int) int { return x * 2 })(numbers)
// doubled := array.Map(N.Mul(2))(numbers)
// // Result: [2, 4, 6, 8, 10]
//
// // Filter keeps elements matching a predicate

View File

@@ -16,22 +16,11 @@
package array
import (
"slices"
E "github.com/IBM/fp-go/v2/eq"
)
func equals[T any](left []T, right []T, eq func(T, T) bool) bool {
if len(left) != len(right) {
return false
}
for i, v1 := range left {
v2 := right[i]
if !eq(v1, v2) {
return false
}
}
return true
}
// Eq creates an equality checker for arrays given an equality checker for elements.
// Two arrays are considered equal if they have the same length and all corresponding
// elements are equal according to the provided Eq instance.
@@ -46,6 +35,11 @@ func equals[T any](left []T, right []T, eq func(T, T) bool) bool {
func Eq[T any](e E.Eq[T]) E.Eq[[]T] {
eq := e.Equals
return E.FromEquals(func(left, right []T) bool {
return equals(left, right, eq)
return slices.EqualFunc(left, right, eq)
})
}
//go:inline
func StrictEquals[T comparable]() E.Eq[[]T] {
return E.FromEquals(slices.Equal[[]T])
}
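A short usage sketch for the two constructors, written as if inside the array package; it assumes `E.FromStrictEquals` from the eq package for the element-level comparer:

```go
intsEq := StrictEquals[int]()
intsEq.Equals([]int{1, 2, 3}, []int{1, 2, 3}) // true
intsEq.Equals([]int{1, 2, 3}, []int{3, 2, 1}) // false

// Eq lifts an element-level instance to slices
strsEq := Eq(E.FromStrictEquals[string]())
strsEq.Equals([]string{"a"}, []string{"a"}) // true
```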

View File

@@ -17,7 +17,7 @@ package array
import (
G "github.com/IBM/fp-go/v2/array/generic"
O "github.com/IBM/fp-go/v2/option"
"github.com/IBM/fp-go/v2/option"
)
// FindFirst finds the first element which satisfies a predicate function.
@@ -30,7 +30,7 @@ import (
// result2 := findGreaterThan3([]int{1, 2, 3}) // None
//
//go:inline
func FindFirst[A any](pred func(A) bool) func([]A) O.Option[A] {
func FindFirst[A any](pred func(A) bool) option.Kleisli[[]A, A] {
return G.FindFirst[[]A](pred)
}
@@ -45,7 +45,7 @@ func FindFirst[A any](pred func(A) bool) func([]A) O.Option[A] {
// result := findEvenAtEvenIndex([]int{1, 3, 4, 5}) // Some(4)
//
//go:inline
func FindFirstWithIndex[A any](pred func(int, A) bool) func([]A) O.Option[A] {
func FindFirstWithIndex[A any](pred func(int, A) bool) option.Kleisli[[]A, A] {
return G.FindFirstWithIndex[[]A](pred)
}
@@ -65,7 +65,7 @@ func FindFirstWithIndex[A any](pred func(int, A) bool) func([]A) O.Option[A] {
// result := parseFirst([]string{"a", "42", "b"}) // Some(42)
//
//go:inline
func FindFirstMap[A, B any](sel func(A) O.Option[B]) func([]A) O.Option[B] {
func FindFirstMap[A, B any](sel option.Kleisli[A, B]) option.Kleisli[[]A, B] {
return G.FindFirstMap[[]A](sel)
}
@@ -73,7 +73,7 @@ func FindFirstMap[A, B any](sel func(A) O.Option[B]) func([]A) O.Option[B] {
// The selector receives both the index and the element.
//
//go:inline
func FindFirstMapWithIndex[A, B any](sel func(int, A) O.Option[B]) func([]A) O.Option[B] {
func FindFirstMapWithIndex[A, B any](sel func(int, A) Option[B]) option.Kleisli[[]A, B] {
return G.FindFirstMapWithIndex[[]A](sel)
}
@@ -86,7 +86,7 @@ func FindFirstMapWithIndex[A, B any](sel func(int, A) O.Option[B]) func([]A) O.O
// result := findGreaterThan3([]int{1, 4, 2, 5}) // Some(5)
//
//go:inline
func FindLast[A any](pred func(A) bool) func([]A) O.Option[A] {
func FindLast[A any](pred func(A) bool) option.Kleisli[[]A, A] {
return G.FindLast[[]A](pred)
}
@@ -94,7 +94,7 @@ func FindLast[A any](pred func(A) bool) func([]A) O.Option[A] {
// Returns Some(element) if found, None if no element matches.
//
//go:inline
func FindLastWithIndex[A any](pred func(int, A) bool) func([]A) O.Option[A] {
func FindLastWithIndex[A any](pred func(int, A) bool) option.Kleisli[[]A, A] {
return G.FindLastWithIndex[[]A](pred)
}
@@ -102,7 +102,7 @@ func FindLastWithIndex[A any](pred func(int, A) bool) func([]A) O.Option[A] {
// This combines finding and mapping in a single operation, searching from the end.
//
//go:inline
func FindLastMap[A, B any](sel func(A) O.Option[B]) func([]A) O.Option[B] {
func FindLastMap[A, B any](sel option.Kleisli[A, B]) option.Kleisli[[]A, B] {
return G.FindLastMap[[]A](sel)
}
@@ -110,6 +110,6 @@ func FindLastMap[A, B any](sel func(A) O.Option[B]) func([]A) O.Option[B] {
// The selector receives both the index and the element, searching from the end.
//
//go:inline
func FindLastMapWithIndex[A, B any](sel func(int, A) O.Option[B]) func([]A) O.Option[B] {
func FindLastMapWithIndex[A, B any](sel func(int, A) Option[B]) option.Kleisli[[]A, B] {
return G.FindLastMapWithIndex[[]A](sel)
}
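Because these finders now return option.Kleisli values, they slot directly into Option pipelines. A minimal sketch, assuming the `A`, `F`, `O` and `N` aliases used elsewhere in this diff:

```go
firstEven := A.FindFirst(func(n int) bool { return n%2 == 0 })

result := F.Pipe2(
	[]int{3, 7, 8, 10},
	firstEven,        // Some(8)
	O.Map(N.Mul(10)), // Some(80)
)
```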

View File

@@ -25,8 +25,10 @@ import (
)
// Of constructs a single element array
//
//go:inline
func Of[GA ~[]A, A any](value A) GA {
return GA{value}
return array.Of[GA](value)
}
func Reduce[GA ~[]A, A, B any](f func(B, A) B, initial B) func(GA) B {
@@ -82,7 +84,7 @@ func MakeBy[AS ~[]A, F ~func(int) A, A any](n int, f F) AS {
}
// run the generator function across the input
as := make(AS, n)
for i := n - 1; i >= 0; i-- {
for i := range n {
as[i] = f(i)
}
return as
@@ -138,22 +140,27 @@ func Empty[GA ~[]A, A any]() GA {
return array.Empty[GA]()
}
//go:inline
func UpsertAt[GA ~[]A, A any](a A) func(GA) GA {
return array.UpsertAt[GA](a)
}
//go:inline
func MonadMap[GA ~[]A, GB ~[]B, A, B any](as GA, f func(a A) B) GB {
return array.MonadMap[GA, GB](as, f)
}
//go:inline
func Map[GA ~[]A, GB ~[]B, A, B any](f func(a A) B) func(GA) GB {
return array.Map[GA, GB](f)
}
//go:inline
func MonadMapWithIndex[GA ~[]A, GB ~[]B, A, B any](as GA, f func(int, A) B) GB {
return array.MonadMapWithIndex[GA, GB](as, f)
}
//go:inline
func MapWithIndex[GA ~[]A, GB ~[]B, A, B any](f func(int, A) B) func(GA) GB {
return F.Bind2nd(MonadMapWithIndex[GA, GB, A, B], f)
}
@@ -165,10 +172,9 @@ func Size[GA ~[]A, A any](as GA) int {
func filterMap[GA ~[]A, GB ~[]B, A, B any](fa GA, f func(A) O.Option[B]) GB {
result := make(GB, 0, len(fa))
for _, a := range fa {
O.Map(func(b B) B {
if b, ok := O.Unwrap(f(a)); ok {
result = append(result, b)
return b
})(f(a))
}
}
return result
}
@@ -176,10 +182,9 @@ func filterMap[GA ~[]A, GB ~[]B, A, B any](fa GA, f func(A) O.Option[B]) GB {
func filterMapWithIndex[GA ~[]A, GB ~[]B, A, B any](fa GA, f func(int, A) O.Option[B]) GB {
result := make(GB, 0, len(fa))
for i, a := range fa {
O.Map(func(b B) B {
if b, ok := O.Unwrap(f(i, a)); ok {
result = append(result, b)
return b
})(f(i, a))
}
}
return result
}
@@ -297,7 +302,7 @@ func MatchLeft[AS ~[]A, A, B any](onEmpty func() B, onNonEmpty func(A, AS) B) fu
}
//go:inline
func Slice[AS ~[]A, A any](start int, end int) func(AS) AS {
func Slice[AS ~[]A, A any](start, end int) func(AS) AS {
return array.Slice[AS](start, end)
}
@@ -361,6 +366,12 @@ func Flap[FAB ~func(A) B, GFAB ~[]FAB, GB ~[]B, A, B any](a A) func(GFAB) GB {
return FC.Flap(Map[GFAB, GB], a)
}
//go:inline
func Prepend[ENDO ~func(AS) AS, AS []A, A any](head A) ENDO {
return array.Prepend[ENDO](head)
}
//go:inline
func Reverse[GT ~[]T, T any](as GT) GT {
return array.Reverse(as)
}

View File

@@ -42,8 +42,7 @@ func FindFirst[AS ~[]A, PRED ~func(A) bool, A any](pred PRED) func(AS) O.Option[
func FindFirstMapWithIndex[AS ~[]A, PRED ~func(int, A) O.Option[B], A, B any](pred PRED) func(AS) O.Option[B] {
none := O.None[B]()
return func(as AS) O.Option[B] {
count := len(as)
for i := 0; i < count; i++ {
for i := range len(as) {
out := pred(i, as[i])
if O.IsSome(out) {
return out

View File

@@ -0,0 +1,34 @@
package generic
import (
"github.com/IBM/fp-go/v2/internal/array"
M "github.com/IBM/fp-go/v2/monoid"
S "github.com/IBM/fp-go/v2/semigroup"
)
// Monoid returns a Monoid instance for arrays.
// The Monoid combines arrays through concatenation, with an empty array as the identity element.
//
// Example:
//
// m := array.Monoid[int]()
// result := m.Concat([]int{1, 2}, []int{3, 4}) // [1, 2, 3, 4]
// empty := m.Empty() // []
//
//go:inline
func Monoid[GT ~[]T, T any]() M.Monoid[GT] {
return M.MakeMonoid(array.Concat[GT], Empty[GT]())
}
// Semigroup returns a Semigroup instance for arrays.
// The Semigroup combines arrays through concatenation.
//
// Example:
//
// s := array.Semigroup[int]()
// result := s.Concat([]int{1, 2}, []int{3, 4}) // [1, 2, 3, 4]
//
//go:inline
func Semigroup[GT ~[]T, T any]() S.Semigroup[GT] {
return S.MakeSemigroup(array.Concat[GT])
}
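These instances pair naturally with Fold from the array package (shown further up in this diff). A minimal sketch of concatenating a batch of slices via the monoid, assuming the `A` alias for the v2 array package:

```go
concatAll := A.Fold(A.Monoid[int]())

flat := concatAll([][]int{{1, 2}, {3}, {4, 5}})
// flat: [1 2 3 4 5]
```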

View File

@@ -26,7 +26,7 @@ import (
func ZipWith[AS ~[]A, BS ~[]B, CS ~[]C, FCT ~func(A, B) C, A, B, C any](fa AS, fb BS, f FCT) CS {
l := N.Min(len(fa), len(fb))
res := make(CS, l)
for i := l - 1; i >= 0; i-- {
for i := range l {
res[i] = f(fa[i], fb[i])
}
return res
@@ -43,7 +43,7 @@ func Unzip[AS ~[]A, BS ~[]B, CS ~[]T.Tuple2[A, B], A, B any](cs CS) T.Tuple2[AS,
l := len(cs)
as := make(AS, l)
bs := make(BS, l)
for i := l - 1; i >= 0; i-- {
for i := range l {
t := cs[i]
as[i] = t.F1
bs[i] = t.F2

View File

@@ -18,7 +18,6 @@ package array
import (
"testing"
O "github.com/IBM/fp-go/v2/option"
OR "github.com/IBM/fp-go/v2/ord"
"github.com/stretchr/testify/assert"
)
@@ -103,39 +102,6 @@ func TestSortByKey(t *testing.T) {
assert.Equal(t, "Charlie", result[2].Name)
}
func TestMonadTraverse(t *testing.T) {
result := MonadTraverse(
O.Of[[]int],
O.Map[[]int, func(int) []int],
O.Ap[[]int, int],
[]int{1, 3, 5},
func(n int) O.Option[int] {
if n%2 == 1 {
return O.Some(n * 2)
}
return O.None[int]()
},
)
assert.Equal(t, O.Some([]int{2, 6, 10}), result)
// Test with None case
result2 := MonadTraverse(
O.Of[[]int],
O.Map[[]int, func(int) []int],
O.Ap[[]int, int],
[]int{1, 2, 3},
func(n int) O.Option[int] {
if n%2 == 1 {
return O.Some(n * 2)
}
return O.None[int]()
},
)
assert.Equal(t, O.None[[]int](), result2)
}
func TestUniqByKey(t *testing.T) {
type Person struct {
Name string

View File

@@ -16,27 +16,12 @@
package array
import (
G "github.com/IBM/fp-go/v2/array/generic"
"github.com/IBM/fp-go/v2/internal/array"
M "github.com/IBM/fp-go/v2/monoid"
S "github.com/IBM/fp-go/v2/semigroup"
)
func concat[T any](left, right []T) []T {
// some performance checks
ll := len(left)
if ll == 0 {
return right
}
lr := len(right)
if lr == 0 {
return left
}
// need to copy
buf := make([]T, ll+lr)
copy(buf[copy(buf, left):], right)
return buf
}
// Monoid returns a Monoid instance for arrays.
// The Monoid combines arrays through concatenation, with an empty array as the identity element.
//
@@ -45,8 +30,10 @@ func concat[T any](left, right []T) []T {
// m := array.Monoid[int]()
// result := m.Concat([]int{1, 2}, []int{3, 4}) // [1, 2, 3, 4]
// empty := m.Empty() // []
//
//go:inline
func Monoid[T any]() M.Monoid[[]T] {
return M.MakeMonoid(concat[T], Empty[T]())
return G.Monoid[[]T]()
}
// Semigroup returns a Semigroup instance for arrays.
@@ -56,8 +43,10 @@ func Monoid[T any]() M.Monoid[[]T] {
//
// s := array.Semigroup[int]()
// result := s.Concat([]int{1, 2}, []int{3, 4}) // [1, 2, 3, 4]
//
//go:inline
func Semigroup[T any]() S.Semigroup[[]T] {
return S.MakeSemigroup(concat[T])
return G.Semigroup[[]T]()
}
func addLen[A any](count int, data []A) int {

View File

@@ -18,14 +18,11 @@ package nonempty
import (
G "github.com/IBM/fp-go/v2/array/generic"
EM "github.com/IBM/fp-go/v2/endomorphism"
F "github.com/IBM/fp-go/v2/function"
"github.com/IBM/fp-go/v2/internal/array"
"github.com/IBM/fp-go/v2/option"
S "github.com/IBM/fp-go/v2/semigroup"
)
// NonEmptyArray represents an array with at least one element
type NonEmptyArray[A any] []A
// Of constructs a single element array
func Of[A any](first A) NonEmptyArray[A] {
return G.Of[NonEmptyArray[A]](first)
@@ -44,20 +41,24 @@ func From[A any](first A, data ...A) NonEmptyArray[A] {
return buffer
}
//go:inline
func IsEmpty[A any](_ NonEmptyArray[A]) bool {
return false
}
//go:inline
func IsNonEmpty[A any](_ NonEmptyArray[A]) bool {
return true
}
//go:inline
func MonadMap[A, B any](as NonEmptyArray[A], f func(a A) B) NonEmptyArray[B] {
return G.MonadMap[NonEmptyArray[A], NonEmptyArray[B]](as, f)
}
func Map[A, B any](f func(a A) B) func(NonEmptyArray[A]) NonEmptyArray[B] {
return F.Bind2nd(MonadMap[A, B], f)
//go:inline
func Map[A, B any](f func(a A) B) Operator[A, B] {
return G.Map[NonEmptyArray[A], NonEmptyArray[B]](f)
}
func Reduce[A, B any](f func(B, A) B, initial B) func(NonEmptyArray[A]) B {
@@ -72,22 +73,27 @@ func ReduceRight[A, B any](f func(A, B) B, initial B) func(NonEmptyArray[A]) B {
}
}
//go:inline
func Tail[A any](as NonEmptyArray[A]) []A {
return as[1:]
}
//go:inline
func Head[A any](as NonEmptyArray[A]) A {
return as[0]
}
//go:inline
func First[A any](as NonEmptyArray[A]) A {
return as[0]
}
//go:inline
func Last[A any](as NonEmptyArray[A]) A {
return as[len(as)-1]
}
//go:inline
func Size[A any](as NonEmptyArray[A]) int {
return G.Size(as)
}
@@ -96,11 +102,11 @@ func Flatten[A any](mma NonEmptyArray[NonEmptyArray[A]]) NonEmptyArray[A] {
return G.Flatten(mma)
}
func MonadChain[A, B any](fa NonEmptyArray[A], f func(a A) NonEmptyArray[B]) NonEmptyArray[B] {
func MonadChain[A, B any](fa NonEmptyArray[A], f Kleisli[A, B]) NonEmptyArray[B] {
return G.MonadChain(fa, f)
}
func Chain[A, B any](f func(A) NonEmptyArray[B]) func(NonEmptyArray[A]) NonEmptyArray[B] {
func Chain[A, B any](f func(A) NonEmptyArray[B]) Operator[A, B] {
return G.Chain[NonEmptyArray[A]](f)
}
@@ -134,3 +140,89 @@ func Fold[A any](s S.Semigroup[A]) func(NonEmptyArray[A]) A {
func Prepend[A any](head A) EM.Endomorphism[NonEmptyArray[A]] {
return array.Prepend[EM.Endomorphism[NonEmptyArray[A]]](head)
}
// ToNonEmptyArray attempts to convert a regular slice into a NonEmptyArray.
// This function provides a safe way to create a NonEmptyArray from a slice that might be empty,
// returning an Option type to handle the case where the input slice is empty.
//
// Type Parameters:
// - A: The element type of the array
//
// Parameters:
// - as: A regular slice that may or may not be empty
//
// Returns:
// - Option[NonEmptyArray[A]]: Some(NonEmptyArray) if the input slice is non-empty, None if empty
//
// Behavior:
// - If the input slice is empty, returns None
// - If the input slice has at least one element, wraps it in Some and returns it as a NonEmptyArray
// - The conversion is a type cast, so no data is copied
//
// Example:
//
// // Convert non-empty slice
// numbers := []int{1, 2, 3}
// result := ToNonEmptyArray(numbers) // Some(NonEmptyArray[1, 2, 3])
//
// // Convert empty slice
// empty := []int{}
// result := ToNonEmptyArray(empty) // None
//
// // Use with Option methods
// numbers := []int{1, 2, 3}
// result := ToNonEmptyArray(numbers)
// if O.IsSome(result) {
// nea := O.GetOrElse(F.Constant(From(0)))(result)
// head := Head(nea) // 1
// }
//
// Use cases:
// - Safely converting user input or external data to NonEmptyArray
// - Validating that a collection has at least one element before processing
// - Converting results from functions that return regular slices
// - Ensuring type safety when working with collections that must not be empty
//
// Example with validation:
//
// func processItems(items []string) Option[string] {
// return F.Pipe2(
// items,
// ToNonEmptyArray[string],
// O.Map(func(nea NonEmptyArray[string]) string {
// return Head(nea) // Safe to get head since we know it's non-empty
// }),
// )
// }
//
// Example with error handling:
//
// items := []int{1, 2, 3}
// result := ToNonEmptyArray(items)
// switch {
// case O.IsSome(result):
// nea := O.GetOrElse(F.Constant(From(0)))(result)
// fmt.Println("First item:", Head(nea))
// case O.IsNone(result):
// fmt.Println("Array is empty")
// }
//
// Example with chaining:
//
// // Process only if non-empty
// result := F.Pipe3(
// []int{1, 2, 3},
// ToNonEmptyArray[int],
// O.Map(Map(func(x int) int { return x * 2 })),
// O.Map(Head[int]),
// ) // Some(2)
//
// Note: This function is particularly useful when working with APIs or functions
// that return regular slices but where you need the type-level guarantee that the
// collection is non-empty for subsequent operations.
func ToNonEmptyArray[A any](as []A) Option[NonEmptyArray[A]] {
if G.IsEmpty(as) {
return option.None[NonEmptyArray[A]]()
}
return option.Some(NonEmptyArray[A](as))
}

View File

@@ -0,0 +1,370 @@
// Copyright (c) 2023 - 2025 IBM Corp.
// All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package nonempty
import (
"testing"
F "github.com/IBM/fp-go/v2/function"
O "github.com/IBM/fp-go/v2/option"
"github.com/stretchr/testify/assert"
)
// TestToNonEmptyArray tests the ToNonEmptyArray function
func TestToNonEmptyArray(t *testing.T) {
t.Run("Convert non-empty slice of integers", func(t *testing.T) {
input := []int{1, 2, 3}
result := ToNonEmptyArray(input)
assert.True(t, O.IsSome(result))
nea := O.GetOrElse(F.Constant(From(0)))(result)
assert.Equal(t, 3, Size(nea))
assert.Equal(t, 1, Head(nea))
assert.Equal(t, 3, Last(nea))
})
t.Run("Convert empty slice returns None", func(t *testing.T) {
input := []int{}
result := ToNonEmptyArray(input)
assert.True(t, O.IsNone(result))
})
t.Run("Convert single element slice", func(t *testing.T) {
input := []string{"hello"}
result := ToNonEmptyArray(input)
assert.True(t, O.IsSome(result))
nea := O.GetOrElse(F.Constant(From("")))(result)
assert.Equal(t, 1, Size(nea))
assert.Equal(t, "hello", Head(nea))
})
t.Run("Convert non-empty slice of strings", func(t *testing.T) {
input := []string{"a", "b", "c", "d"}
result := ToNonEmptyArray(input)
assert.True(t, O.IsSome(result))
nea := O.GetOrElse(F.Constant(From("")))(result)
assert.Equal(t, 4, Size(nea))
assert.Equal(t, "a", Head(nea))
assert.Equal(t, "d", Last(nea))
})
t.Run("Convert nil slice returns None", func(t *testing.T) {
var input []int
result := ToNonEmptyArray(input)
assert.True(t, O.IsNone(result))
})
t.Run("Convert slice with struct elements", func(t *testing.T) {
type Person struct {
Name string
Age int
}
input := []Person{
{Name: "Alice", Age: 30},
{Name: "Bob", Age: 25},
}
result := ToNonEmptyArray(input)
assert.True(t, O.IsSome(result))
nea := O.GetOrElse(F.Constant(From(Person{})))(result)
assert.Equal(t, 2, Size(nea))
assert.Equal(t, "Alice", Head(nea).Name)
})
t.Run("Convert slice with pointer elements", func(t *testing.T) {
val1, val2 := 10, 20
input := []*int{&val1, &val2}
result := ToNonEmptyArray(input)
assert.True(t, O.IsSome(result))
nea := O.GetOrElse(F.Constant(From[*int](nil)))(result)
assert.Equal(t, 2, Size(nea))
assert.Equal(t, 10, *Head(nea))
})
t.Run("Convert large slice", func(t *testing.T) {
input := make([]int, 1000)
for i := range input {
input[i] = i
}
result := ToNonEmptyArray(input)
assert.True(t, O.IsSome(result))
nea := O.GetOrElse(F.Constant(From(0)))(result)
assert.Equal(t, 1000, Size(nea))
assert.Equal(t, 0, Head(nea))
assert.Equal(t, 999, Last(nea))
})
t.Run("Convert slice with float64 elements", func(t *testing.T) {
input := []float64{1.5, 2.5, 3.5}
result := ToNonEmptyArray(input)
assert.True(t, O.IsSome(result))
nea := O.GetOrElse(F.Constant(From(0.0)))(result)
assert.Equal(t, 3, Size(nea))
assert.Equal(t, 1.5, Head(nea))
})
t.Run("Convert slice with boolean elements", func(t *testing.T) {
input := []bool{true, false, true}
result := ToNonEmptyArray(input)
assert.True(t, O.IsSome(result))
nea := O.GetOrElse(F.Constant(From(false)))(result)
assert.Equal(t, 3, Size(nea))
assert.True(t, Head(nea))
})
}
// TestToNonEmptyArrayWithOption tests ToNonEmptyArray with Option operations
func TestToNonEmptyArrayWithOption(t *testing.T) {
t.Run("Chain with Map to process elements", func(t *testing.T) {
input := []int{1, 2, 3}
result := F.Pipe2(
input,
ToNonEmptyArray[int],
O.Map(Map(func(x int) int { return x * 2 })),
)
assert.True(t, O.IsSome(result))
nea := O.GetOrElse(F.Constant(From(0)))(result)
assert.Equal(t, 2, Head(nea))
assert.Equal(t, 6, Last(nea))
})
t.Run("Chain with Map to get head", func(t *testing.T) {
input := []string{"first", "second", "third"}
result := F.Pipe2(
input,
ToNonEmptyArray[string],
O.Map(Head[string]),
)
assert.True(t, O.IsSome(result))
value := O.GetOrElse(F.Constant(""))(result)
assert.Equal(t, "first", value)
})
t.Run("GetOrElse with default value for empty slice", func(t *testing.T) {
input := []int{}
defaultValue := From(42)
result := F.Pipe2(
input,
ToNonEmptyArray[int],
O.GetOrElse(F.Constant(defaultValue)),
)
assert.Equal(t, 1, Size(result))
assert.Equal(t, 42, Head(result))
})
t.Run("GetOrElse with default value for non-empty slice", func(t *testing.T) {
input := []int{1, 2, 3}
defaultValue := From(42)
result := F.Pipe2(
input,
ToNonEmptyArray[int],
O.GetOrElse(F.Constant(defaultValue)),
)
assert.Equal(t, 3, Size(result))
assert.Equal(t, 1, Head(result))
})
t.Run("Fold with Some case", func(t *testing.T) {
input := []int{1, 2, 3}
result := F.Pipe2(
input,
ToNonEmptyArray[int],
O.Fold(
F.Constant(0),
func(nea NonEmptyArray[int]) int { return Head(nea) },
),
)
assert.Equal(t, 1, result)
})
t.Run("Fold with None case", func(t *testing.T) {
input := []int{}
result := F.Pipe2(
input,
ToNonEmptyArray[int],
O.Fold(
F.Constant(-1),
func(nea NonEmptyArray[int]) int { return Head(nea) },
),
)
assert.Equal(t, -1, result)
})
}
// TestToNonEmptyArrayComposition tests composing ToNonEmptyArray with other operations
func TestToNonEmptyArrayComposition(t *testing.T) {
t.Run("Compose with filter-like operation", func(t *testing.T) {
input := []int{1, 2, 3, 4, 5}
// Filter even numbers then convert
filtered := []int{}
for _, x := range input {
if x%2 == 0 {
filtered = append(filtered, x)
}
}
result := ToNonEmptyArray(filtered)
assert.True(t, O.IsSome(result))
nea := O.GetOrElse(F.Constant(From(0)))(result)
assert.Equal(t, 2, Size(nea))
assert.Equal(t, 2, Head(nea))
})
t.Run("Compose with map operation before conversion", func(t *testing.T) {
input := []int{1, 2, 3}
// Map then convert
mapped := make([]int, len(input))
for i, x := range input {
mapped[i] = x * 10
}
result := ToNonEmptyArray(mapped)
assert.True(t, O.IsSome(result))
nea := O.GetOrElse(F.Constant(From(0)))(result)
assert.Equal(t, 10, Head(nea))
assert.Equal(t, 30, Last(nea))
})
t.Run("Chain multiple Option operations", func(t *testing.T) {
input := []int{5, 10, 15}
result := F.Pipe3(
input,
ToNonEmptyArray[int],
O.Map(Map(func(x int) int { return x / 5 })),
O.Map(func(nea NonEmptyArray[int]) int {
return Head(nea) + Last(nea)
}),
)
assert.True(t, O.IsSome(result))
value := O.GetOrElse(F.Constant(0))(result)
assert.Equal(t, 4, value) // 1 + 3
})
}
// TestToNonEmptyArrayUseCases demonstrates practical use cases
func TestToNonEmptyArrayUseCases(t *testing.T) {
t.Run("Validate user input has at least one item", func(t *testing.T) {
// Simulate user input
userInput := []string{"item1", "item2"}
result := ToNonEmptyArray(userInput)
if O.IsSome(result) {
nea := O.GetOrElse(F.Constant(From("")))(result)
firstItem := Head(nea)
assert.Equal(t, "item1", firstItem)
} else {
t.Fatal("Expected Some but got None")
}
})
t.Run("Process only non-empty collections", func(t *testing.T) {
processItems := func(items []int) Option[int] {
return F.Pipe2(
items,
ToNonEmptyArray[int],
O.Map(func(nea NonEmptyArray[int]) int {
// Safe to use Head since we know it's non-empty
return Head(nea) * 2
}),
)
}
result1 := processItems([]int{5, 10, 15})
assert.True(t, O.IsSome(result1))
assert.Equal(t, 10, O.GetOrElse(F.Constant(0))(result1))
result2 := processItems([]int{})
assert.True(t, O.IsNone(result2))
})
t.Run("Convert API response to NonEmptyArray", func(t *testing.T) {
// Simulate API response
type APIResponse struct {
Items []string
}
response := APIResponse{Items: []string{"data1", "data2", "data3"}}
result := F.Pipe2(
response.Items,
ToNonEmptyArray[string],
O.Map(func(nea NonEmptyArray[string]) string {
return "First item: " + Head(nea)
}),
)
assert.True(t, O.IsSome(result))
message := O.GetOrElse(F.Constant("No items"))(result)
assert.Equal(t, "First item: data1", message)
})
t.Run("Ensure collection is non-empty before processing", func(t *testing.T) {
calculateAverage := func(numbers []float64) Option[float64] {
return F.Pipe2(
numbers,
ToNonEmptyArray[float64],
O.Map(func(nea NonEmptyArray[float64]) float64 {
sum := 0.0
for _, n := range nea {
sum += n
}
return sum / float64(Size(nea))
}),
)
}
result1 := calculateAverage([]float64{10.0, 20.0, 30.0})
assert.True(t, O.IsSome(result1))
assert.Equal(t, 20.0, O.GetOrElse(F.Constant(0.0))(result1))
result2 := calculateAverage([]float64{})
assert.True(t, O.IsNone(result2))
})
t.Run("Safe head extraction with type guarantee", func(t *testing.T) {
getFirstOrDefault := func(items []string, defaultValue string) string {
return F.Pipe2(
items,
ToNonEmptyArray[string],
O.Fold(
F.Constant(defaultValue),
Head[string],
),
)
}
result1 := getFirstOrDefault([]string{"a", "b", "c"}, "default")
assert.Equal(t, "a", result1)
result2 := getFirstOrDefault([]string{}, "default")
assert.Equal(t, "default", result2)
})
}

View File

@@ -0,0 +1,20 @@
package nonempty
import "github.com/IBM/fp-go/v2/option"
type (
// NonEmptyArray represents an array that is guaranteed to have at least one element.
// This provides compile-time safety for operations that require non-empty collections.
NonEmptyArray[A any] []A
// Kleisli represents a Kleisli arrow for the NonEmptyArray monad.
// It's a function from A to NonEmptyArray[B], used for composing operations that produce non-empty arrays.
Kleisli[A, B any] = func(A) NonEmptyArray[B]
// Operator represents a function that transforms one NonEmptyArray into another.
// It takes a NonEmptyArray[A] and produces a NonEmptyArray[B].
Operator[A, B any] = Kleisli[NonEmptyArray[A], B]
// Option represents an optional value that may or may not be present.
Option[A any] = option.Option[A]
)
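A short sketch of how the Kleisli and Operator aliases read in practice, written as if inside this package and using the From and Chain helpers shown above:

```go
// expand is a Kleisli arrow: each element becomes a non-empty array
var expand Kleisli[int, int] = func(n int) NonEmptyArray[int] {
	return From(n, n*10)
}

// Chain(expand) is an Operator[int, int]
result := Chain(expand)(From(1, 2))
// result: [1 10 2 20]
```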

View File

@@ -16,10 +16,18 @@
package array
import (
F "github.com/IBM/fp-go/v2/function"
"github.com/IBM/fp-go/v2/internal/array"
M "github.com/IBM/fp-go/v2/monoid"
O "github.com/IBM/fp-go/v2/option"
)
func MonadSequence[HKTA, HKTRA any](
fof func(HKTA) HKTRA,
m M.Monoid[HKTRA],
ma []HKTA) HKTRA {
return array.MonadSequence(fof, m.Empty(), m.Concat, ma)
}
// Sequence takes an array where elements are HKT<A> (higher kinded type) and,
// using an applicative of that HKT, returns an HKT of []A.
//
@@ -55,16 +63,11 @@ import (
// option.MonadAp[[]int, int],
// )
// result := seq(opts) // Some([1, 2, 3])
func Sequence[A, HKTA, HKTRA, HKTFRA any](
_of func([]A) HKTRA,
_map func(HKTRA, func([]A) func(A) []A) HKTFRA,
_ap func(HKTFRA, HKTA) HKTRA,
func Sequence[HKTA, HKTRA any](
fof func(HKTA) HKTRA,
m M.Monoid[HKTRA],
) func([]HKTA) HKTRA {
ca := F.Curry2(Append[A])
empty := _of(Empty[A]())
return Reduce(func(fas HKTRA, fa HKTA) HKTRA {
return _ap(_map(fas, ca), fa)
}, empty)
return array.Sequence[[]HKTA](fof, m.Empty(), m.Concat)
}
// ArrayOption returns a function to convert a sequence of options into an option of a sequence.
@@ -86,10 +89,10 @@ func Sequence[A, HKTA, HKTRA, HKTFRA any](
// option.Some(3),
// }
// result2 := array.ArrayOption[int]()(opts2) // None
func ArrayOption[A any]() func([]O.Option[A]) O.Option[[]A] {
return Sequence(
O.Of[[]A],
O.MonadMap[[]A, func(A) []A],
O.MonadAp[[]A, A],
func ArrayOption[A any](ma []Option[A]) Option[[]A] {
return MonadSequence(
O.Map(Of[A]),
O.ApplicativeMonoid(Monoid[A]()),
ma,
)
}

View File

@@ -24,8 +24,7 @@ import (
)
func TestSequenceOption(t *testing.T) {
seq := ArrayOption[int]()
assert.Equal(t, O.Of([]int{1, 3}), seq([]O.Option[int]{O.Of(1), O.Of(3)}))
assert.Equal(t, O.None[[]int](), seq([]O.Option[int]{O.Of(1), O.None[int]()}))
assert.Equal(t, O.Of([]int{1, 3}), ArrayOption([]O.Option[int]{O.Of(1), O.Of(3)}))
assert.Equal(t, O.None[[]int](), ArrayOption([]O.Option[int]{O.Of(1), O.None[int]()}))
}

View File

@@ -18,6 +18,7 @@ package array
import (
"testing"
N "github.com/IBM/fp-go/v2/number"
"github.com/stretchr/testify/assert"
)
@@ -243,7 +244,7 @@ func TestSliceComposition(t *testing.T) {
t.Run("slice then map", func(t *testing.T) {
sliced := Slice[int](2, 5)(data)
mapped := Map(func(x int) int { return x * 2 })(sliced)
mapped := Map(N.Mul(2))(sliced)
assert.Equal(t, []int{4, 6, 8}, mapped)
})

View File

@@ -32,7 +32,7 @@ import (
// // Result: [1, 1, 2, 3, 4, 5, 6, 9]
//
//go:inline
func Sort[T any](ord O.Ord[T]) func(ma []T) []T {
func Sort[T any](ord O.Ord[T]) Operator[T, T] {
return G.Sort[[]T](ord)
}
@@ -62,7 +62,7 @@ func Sort[T any](ord O.Ord[T]) func(ma []T) []T {
// // Result: [{"Bob", 25}, {"Alice", 30}, {"Charlie", 35}]
//
//go:inline
func SortByKey[K, T any](ord O.Ord[K], f func(T) K) func(ma []T) []T {
func SortByKey[K, T any](ord O.Ord[K], f func(T) K) Operator[T, T] {
return G.SortByKey[[]T](ord, f)
}
@@ -93,6 +93,6 @@ func SortByKey[K, T any](ord O.Ord[K], f func(T) K) func(ma []T) []T {
// // Result: [{"Jones", "Bob"}, {"Smith", "Alice"}, {"Smith", "John"}]
//
//go:inline
func SortBy[T any](ord []O.Ord[T]) func(ma []T) []T {
func SortBy[T any](ord []O.Ord[T]) Operator[T, T] {
return G.SortBy[[]T](ord)
}

View File

@@ -80,3 +80,25 @@ func MonadTraverse[A, B, HKTB, HKTAB, HKTRB any](
return array.MonadTraverse(fof, fmap, fap, ta, f)
}
//go:inline
func TraverseWithIndex[A, B, HKTB, HKTAB, HKTRB any](
fof func([]B) HKTRB,
fmap func(func([]B) func(B) []B) func(HKTRB) HKTAB,
fap func(HKTB) func(HKTAB) HKTRB,
f func(int, A) HKTB) func([]A) HKTRB {
return array.TraverseWithIndex[[]A](fof, fmap, fap, f)
}
//go:inline
func MonadTraverseWithIndex[A, B, HKTB, HKTAB, HKTRB any](
fof func([]B) HKTRB,
fmap func(func([]B) func(B) []B) func(HKTRB) HKTAB,
fap func(HKTB) func(HKTAB) HKTRB,
ta []A,
f func(int, A) HKTB) HKTRB {
return array.MonadTraverseWithIndex(fof, fmap, fap, ta, f)
}

v2/array/types.go Normal file
View File

@@ -0,0 +1,16 @@
package array
import "github.com/IBM/fp-go/v2/option"
type (
// Kleisli represents a Kleisli arrow for arrays.
// It's a function from A to []B, used for composing operations that produce arrays.
Kleisli[A, B any] = func(A) []B
// Operator represents a function that transforms one array into another.
// It takes a []A and produces a []B.
Operator[A, B any] = Kleisli[[]A, B]
// Option represents an optional value that may or may not be present.
Option[A any] = option.Option[A]
)
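These aliases mainly shorten signatures. As a hedged sketch, written as if inside the array package and assuming `F.Flow2` from the function package plus the `N` number alias, an Operator-producing composition might read:

```go
// expand is a Kleisli arrow for plain slices
var expand Kleisli[int, int] = func(n int) []int { return []int{n, n + 100} }

// Map returns an Operator, Chain turns the Kleisli arrow into one; Flow2 glues them together
pipeline := F.Flow2(
	Map(N.Mul(2)),
	Chain(expand),
)
// pipeline([]int{1, 2}) == []int{2, 102, 4, 104}
```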

View File

@@ -46,6 +46,6 @@ func StrictUniq[A comparable](as []A) []A {
// // Result: [{"Alice", 30}, {"Bob", 25}, {"Charlie", 30}]
//
//go:inline
func Uniq[A any, K comparable](f func(A) K) func(as []A) []A {
func Uniq[A any, K comparable](f func(A) K) Operator[A, A] {
return G.Uniq[[]A](f)
}

v2/assert/assert.go Normal file
View File

@@ -0,0 +1,710 @@
// Copyright (c) 2023 - 2025 IBM Corp.
// All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package assert provides functional assertion helpers for testing.
//
// This package wraps testify/assert functions in a Reader monad pattern,
// allowing for composable and functional test assertions. Each assertion
// returns a Reader that takes a *testing.T and performs the assertion.
//
// # Data Last Principle
//
// This package follows the "data last" functional programming principle, where
// the data being operated on comes as the last parameter in a chain of function
// applications. This design enables several powerful functional programming patterns:
//
// 1. **Partial Application**: You can create reusable assertion functions by providing
// configuration parameters first, leaving the data and testing context for later.
//
// 2. **Function Composition**: Assertions can be composed and combined before being
// applied to actual data.
//
// 3. **Point-Free Style**: You can pass assertion functions around without immediately
// providing the data they operate on.
//
// The general pattern is:
//
// assert.Function(config)(data)(testingContext)
// ↑ ↑ ↑
// expected actual *testing.T (always last)
//
// For single-parameter assertions:
//
// assert.Function(data)(testingContext)
// ↑ ↑
// actual *testing.T (always last)
//
// Examples of "data last" in action:
//
// // Multi-parameter: expected value → actual value → testing context
// assert.Equal(42)(result)(t)
// assert.ArrayContains(3)(numbers)(t)
//
// // Single-parameter: data → testing context
// assert.NoError(err)(t)
// assert.ArrayNotEmpty(arr)(t)
//
// // Partial application - create reusable assertions
// isPositive := assert.That(N.MoreThan(0))
// // Later, apply to different values:
// isPositive(42)(t) // Passes
// isPositive(-5)(t) // Fails
//
// // Composition - combine assertions before applying data
// validateUser := func(u User) assert.Reader {
// return assert.AllOf([]assert.Reader{
// assert.Equal("Alice")(u.Name),
// assert.That(func(age int) bool { return age >= 18 })(u.Age),
// })
// }
// validateUser(user)(t)
//
// The package supports:
// - Equality and inequality assertions
// - Collection assertions (arrays, maps, strings)
// - Error handling assertions
// - Result type assertions
// - Custom predicate assertions
// - Composable test suites
//
// Example:
//
// func TestExample(t *testing.T) {
// value := 42
// assert.Equal(42)(value)(t) // Curried style
//
// // Composing multiple assertions
// arr := []int{1, 2, 3}
// assertions := assert.AllOf([]assert.Reader{
// assert.ArrayNotEmpty(arr),
// assert.ArrayLength[int](3)(arr),
// assert.ArrayContains(2)(arr),
// })
// assertions(t)
// }
package assert
import (
"fmt"
"testing"
"github.com/IBM/fp-go/v2/boolean"
"github.com/IBM/fp-go/v2/eq"
"github.com/IBM/fp-go/v2/option"
"github.com/IBM/fp-go/v2/reader"
"github.com/IBM/fp-go/v2/result"
"github.com/stretchr/testify/assert"
)
var (
// Eq is the equal predicate checking if objects are equal
Eq = eq.FromEquals(assert.ObjectsAreEqual)
)
// wrap1 is an internal helper function that wraps testify assertion functions
// into the Reader monad pattern with curried parameters.
//
// It takes a testify assertion function and converts it into a curried function
// that first takes an expected value, then an actual value, and finally returns
// a Reader that performs the assertion when given a *testing.T.
//
// Parameters:
// - wrapped: The testify assertion function to wrap
// - expected: The expected value for comparison
// - msgAndArgs: Optional message and arguments for assertion failure
//
// Returns:
// - A Kleisli function that takes the actual value and returns a Reader
func wrap1[T any](wrapped func(t assert.TestingT, expected, actual any, msgAndArgs ...any) bool, expected T, msgAndArgs ...any) Kleisli[T] {
return func(actual T) Reader {
return func(t *testing.T) bool {
return wrapped(t, expected, actual, msgAndArgs...)
}
}
}
// NotEqual tests if the expected and the actual values are not equal.
//
// This function follows the "data last" principle - you provide the expected value first,
// then the actual value, and finally the testing.T context.
//
// Example:
//
// func TestNotEqual(t *testing.T) {
// value := 42
// assert.NotEqual(10)(value)(t) // Passes: 42 != 10
// assert.NotEqual(42)(value)(t) // Fails: 42 == 42
// }
func NotEqual[T any](expected T) Kleisli[T] {
return wrap1(assert.NotEqual, expected)
}
// Equal tests if the expected and the actual values are equal.
//
// This is one of the most commonly used assertions. It follows the "data last" principle -
// you provide the expected value first, then the actual value, and finally the testing.T context.
//
// Example:
//
// func TestEqual(t *testing.T) {
// result := 2 + 2
// assert.Equal(4)(result)(t) // Passes
//
// name := "Alice"
// assert.Equal("Alice")(name)(t) // Passes
//
// // Can be composed with other assertions
// user := User{Name: "Bob", Age: 30}
// assertions := assert.AllOf([]assert.Reader{
// assert.Equal("Bob")(user.Name),
// assert.Equal(30)(user.Age),
// })
// assertions(t)
// }
func Equal[T any](expected T) Kleisli[T] {
return wrap1(assert.Equal, expected)
}
// ArrayNotEmpty checks if an array is not empty.
//
// Example:
//
// func TestArrayNotEmpty(t *testing.T) {
// numbers := []int{1, 2, 3}
// assert.ArrayNotEmpty(numbers)(t) // Passes
//
// empty := []int{}
// assert.ArrayNotEmpty(empty)(t) // Fails
// }
func ArrayNotEmpty[T any](arr []T) Reader {
return func(t *testing.T) bool {
return assert.NotEmpty(t, arr)
}
}
// RecordNotEmpty checks if a map is not empty.
//
// Example:
//
// func TestRecordNotEmpty(t *testing.T) {
// config := map[string]int{"timeout": 30, "retries": 3}
// assert.RecordNotEmpty(config)(t) // Passes
//
// empty := map[string]int{}
// assert.RecordNotEmpty(empty)(t) // Fails
// }
func RecordNotEmpty[K comparable, T any](mp map[K]T) Reader {
return func(t *testing.T) bool {
return assert.NotEmpty(t, mp)
}
}
// StringNotEmpty checks if a string is not empty.
//
// Example:
//
// func TestStringNotEmpty(t *testing.T) {
// message := "Hello, World!"
// assert.StringNotEmpty(message)(t) // Passes
//
// empty := ""
// assert.StringNotEmpty(empty)(t) // Fails
// }
func StringNotEmpty(s string) Reader {
return func(t *testing.T) bool {
return assert.NotEmpty(t, s)
}
}
// ArrayLength tests if an array has the expected length.
//
// Example:
//
// func TestArrayLength(t *testing.T) {
// numbers := []int{1, 2, 3, 4, 5}
// assert.ArrayLength[int](5)(numbers)(t) // Passes
// assert.ArrayLength[int](3)(numbers)(t) // Fails
// }
func ArrayLength[T any](expected int) Kleisli[[]T] {
return func(actual []T) Reader {
return func(t *testing.T) bool {
return assert.Len(t, actual, expected)
}
}
}
// RecordLength tests if a map has the expected length.
//
// Example:
//
// func TestRecordLength(t *testing.T) {
// config := map[string]string{"host": "localhost", "port": "8080"}
// assert.RecordLength[string, string](2)(config)(t) // Passes
// assert.RecordLength[string, string](3)(config)(t) // Fails
// }
func RecordLength[K comparable, T any](expected int) Kleisli[map[K]T] {
return func(actual map[K]T) Reader {
return func(t *testing.T) bool {
return assert.Len(t, actual, expected)
}
}
}
// StringLength tests if a string has the expected length.
//
// Example:
//
// func TestStringLength(t *testing.T) {
// message := "Hello"
// assert.StringLength[any, any](5)(message)(t) // Passes
// assert.StringLength[any, any](10)(message)(t) // Fails
// }
func StringLength[K comparable, T any](expected int) Kleisli[string] {
return func(actual string) Reader {
return func(t *testing.T) bool {
return assert.Len(t, actual, expected)
}
}
}
// NoError validates that there is no error.
//
// This is commonly used to assert that operations complete successfully.
//
// Example:
//
// func TestNoError(t *testing.T) {
// err := doSomething()
// assert.NoError(err)(t) // Passes if err is nil
//
// // Can be used with result types
// result := result.TryCatch(func() (int, error) {
// return 42, nil
// })
// assert.Success(result)(t) // Uses NoError internally
// }
func NoError(err error) Reader {
return func(t *testing.T) bool {
return assert.NoError(t, err)
}
}
// Error validates that there is an error.
//
// This is used to assert that operations fail as expected.
//
// Example:
//
// func TestError(t *testing.T) {
// err := validateInput("")
// assert.Error(err)(t) // Passes if err is not nil
//
// err2 := validateInput("valid")
// assert.Error(err2)(t) // Fails if err2 is nil
// }
func Error(err error) Reader {
return func(t *testing.T) bool {
return assert.Error(t, err)
}
}
// Success checks if a [Result] represents success.
//
// This is a convenience function for testing Result types from the fp-go library.
//
// Example:
//
// func TestSuccess(t *testing.T) {
// res := result.Of[int](42)
// assert.Success(res)(t) // Passes
//
// failedRes := result.Error[int](errors.New("failed"))
// assert.Success(failedRes)(t) // Fails
// }
func Success[T any](res Result[T]) Reader {
return NoError(result.ToError(res))
}
// Failure checks if a [Result] represents failure.
//
// This is a convenience function for testing Result types from the fp-go library.
//
// Example:
//
// func TestFailure(t *testing.T) {
// res := result.Error[int](errors.New("something went wrong"))
// assert.Failure(res)(t) // Passes
//
// successRes := result.Of[int](42)
// assert.Failure(successRes)(t) // Fails
// }
func Failure[T any](res Result[T]) Reader {
return Error(result.ToError(res))
}
// ArrayContains tests if a value is contained in an array.
//
// Example:
//
// func TestArrayContains(t *testing.T) {
// numbers := []int{1, 2, 3, 4, 5}
// assert.ArrayContains(3)(numbers)(t) // Passes
// assert.ArrayContains(10)(numbers)(t) // Fails
//
// names := []string{"Alice", "Bob", "Charlie"}
// assert.ArrayContains("Bob")(names)(t) // Passes
// }
func ArrayContains[T any](expected T) Kleisli[[]T] {
return func(actual []T) Reader {
return func(t *testing.T) bool {
return assert.Contains(t, actual, expected)
}
}
}
// ContainsKey tests if a key is contained in a map.
//
// Example:
//
// func TestContainsKey(t *testing.T) {
// config := map[string]int{"timeout": 30, "retries": 3}
// assert.ContainsKey[int]("timeout")(config)(t) // Passes
// assert.ContainsKey[int]("maxSize")(config)(t) // Fails
// }
func ContainsKey[T any, K comparable](expected K) Kleisli[map[K]T] {
return func(actual map[K]T) Reader {
return func(t *testing.T) bool {
return assert.Contains(t, actual, expected)
}
}
}
// NotContainsKey tests if a key is not contained in a map.
//
// Example:
//
// func TestNotContainsKey(t *testing.T) {
// config := map[string]int{"timeout": 30, "retries": 3}
// assert.NotContainsKey[int]("maxSize")(config)(t) // Passes
// assert.NotContainsKey[int]("timeout")(config)(t) // Fails
// }
func NotContainsKey[T any, K comparable](expected K) Kleisli[map[K]T] {
return func(actual map[K]T) Reader {
return func(t *testing.T) bool {
return assert.NotContains(t, actual, expected)
}
}
}
// That asserts that a value satisfies the given predicate.
//
// This is a powerful function that allows you to create custom assertions using predicates.
//
// Example:
//
// func TestThat(t *testing.T) {
// // Test if a number is positive
// isPositive := N.MoreThan(0)
// assert.That(isPositive)(42)(t) // Passes
// assert.That(isPositive)(-5)(t) // Fails
//
// // Test if a string is uppercase
// isUppercase := func(s string) bool { return s == strings.ToUpper(s) }
// assert.That(isUppercase)("HELLO")(t) // Passes
// assert.That(isUppercase)("Hello")(t) // Fails
//
// // Can be combined with Local for property testing
// type User struct { Age int }
// ageIsAdult := assert.Local(func(u User) int { return u.Age })(
// assert.That(func(age int) bool { return age >= 18 }),
// )
// user := User{Age: 25}
// ageIsAdult(user)(t) // Passes
// }
func That[T any](pred Predicate[T]) Kleisli[T] {
return func(a T) Reader {
return func(t *testing.T) bool {
if pred(a) {
return true
}
return assert.Fail(t, fmt.Sprintf("Preficate %v does not match value %v", pred, a))
}
}
}
// AllOf combines multiple assertion Readers into a single Reader that passes
// only if all assertions pass.
//
// This function uses boolean AND logic (MonoidAll) to combine the results of
// all assertions. If any assertion fails, the combined assertion fails.
//
// This is useful for grouping related assertions together and ensuring all
// conditions are met.
//
// Parameters:
// - readers: Array of assertion Readers to combine
//
// Returns:
// - A single Reader that performs all assertions and returns true only if all pass
//
// Example:
//
// func TestUser(t *testing.T) {
// user := User{Name: "Alice", Age: 30, Active: true}
// assertions := assert.AllOf([]assert.Reader{
// assert.Equal("Alice")(user.Name),
// assert.Equal(30)(user.Age),
// assert.Equal(true)(user.Active),
// })
// assertions(t)
// }
//
//go:inline
func AllOf(readers []Reader) Reader {
return reader.MonadReduceArrayM(readers, boolean.MonoidAll)
}
// RunAll executes a map of named test cases, running each as a subtest.
//
// This function creates a Reader that runs multiple named test cases using
// Go's t.Run for proper test isolation and reporting. Each test case is
// executed as a separate subtest with its own name.
//
// The function returns true only if all subtests pass. This allows for
// better test organization and clearer test output.
//
// Parameters:
// - testcases: Map of test names to assertion Readers
//
// Returns:
// - A Reader that executes all named test cases and returns true if all pass
//
// Example:
//
// func TestMathOperations(t *testing.T) {
// testcases := map[string]assert.Reader{
// "addition": assert.Equal(4)(2 + 2),
// "multiplication": assert.Equal(6)(2 * 3),
// "subtraction": assert.Equal(1)(3 - 2),
// }
// assert.RunAll(testcases)(t)
// }
//
//go:inline
func RunAll(testcases map[string]Reader) Reader {
return func(t *testing.T) bool {
current := true
for k, r := range testcases {
// run the subtest first so that every test case executes even after a failure
ok := t.Run(k, func(t1 *testing.T) {
r(t1)
})
current = current && ok
}
return current
}
}
// Local transforms a Reader that works on type R1 into a Reader that works on type R2,
// by providing a function that converts R2 to R1. This allows you to focus a test on a
// specific property or subset of a larger data structure.
//
// This is particularly useful when you have an assertion that operates on a specific field
// or property, and you want to apply it to a complete object. Instead of extracting the
// property and then asserting on it, you can transform the assertion to work directly
// on the whole object.
//
// Parameters:
// - f: A function that extracts or transforms R2 into R1
//
// Returns:
// - A function that transforms a Reader[R1, Reader] into a Reader[R2, Reader]
//
// Example:
//
// type User struct {
// Name string
// Age int
// }
//
// // Create an assertion that checks if age is positive
// ageIsPositive := assert.That(func(age int) bool { return age > 0 })
//
// // Focus this assertion on the Age field of User
// userAgeIsPositive := assert.Local(func(u User) int { return u.Age })(ageIsPositive)
//
// // Now we can test the whole User object
// user := User{Name: "Alice", Age: 30}
// userAgeIsPositive(user)(t)
//
//go:inline
func Local[R1, R2 any](f func(R2) R1) func(Kleisli[R1]) Kleisli[R2] {
return reader.Local[Reader](f)
}
// LocalL is similar to Local but uses a Lens to focus on a specific property.
// A Lens is a functional programming construct that provides a composable way to
// focus on a part of a data structure.
//
// This function is particularly useful when you want to focus a test on a specific
// field of a struct using a lens, making the code more declarative and composable.
// Lenses are often code-generated or predefined for common data structures.
//
// Parameters:
// - l: A Lens that focuses from type S to type T
//
// Returns:
// - A function that transforms a Reader[T, Reader] into a Reader[S, Reader]
//
// Example:
//
// type Person struct {
// Name string
// Email string
// }
//
// // Assume we have a lens that focuses on the Email field
// var emailLens = lens.Prop[Person, string]("Email")
//
// // Create an assertion for email format
// validEmail := assert.That(func(email string) bool {
// return strings.Contains(email, "@")
// })
//
// // Focus this assertion on the Email property using a lens
// validPersonEmail := assert.LocalL(emailLens)(validEmail)
//
// // Test a Person object
// person := Person{Name: "Bob", Email: "bob@example.com"}
// validPersonEmail(person)(t)
//
//go:inline
func LocalL[S, T any](l Lens[S, T]) func(Kleisli[T]) Kleisli[S] {
return reader.Local[Reader](l.Get)
}
// fromOptionalGetter is an internal helper that creates an assertion Reader from
// an optional getter function. It asserts that the optional value is present (Some).
func fromOptionalGetter[S, T any](getter func(S) option.Option[T], msgAndArgs ...any) Kleisli[S] {
return func(s S) Reader {
return func(t *testing.T) bool {
return assert.True(t, option.IsSome(getter(s)), msgAndArgs...)
}
}
}
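// For illustration only (the getter below is hypothetical and not part of the
// package), fromOptionalGetter turns any option-returning accessor into an assertion:
//
// firstByte := fromOptionalGetter(func(s string) option.Option[byte] {
// if len(s) > 0 {
// return option.Of(s[0])
// }
// return option.None[byte]()
// }, "expected a non-empty string")
// firstByte("abc")(t) // passes because the option is Some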
// FromOptional creates an assertion that checks if an Optional can successfully extract a value.
// An Optional is an optic that represents an optional reference to a subpart of a data structure.
//
// This function is useful when you have an Optional optic and want to assert that the optional
// value is present (Some) rather than absent (None). The assertion passes if the Optional's
// GetOption returns Some, and fails if it returns None.
//
// This enables property-focused testing where you verify that a particular optional field or
// sub-structure exists and is accessible.
//
// Parameters:
// - opt: An Optional optic that focuses from type S to type T
//
// Returns:
// - A Reader that asserts the optional value is present when applied to a value of type S
//
// Example:
//
// type Config struct {
// Database *DatabaseConfig // Optional field
// }
//
// type DatabaseConfig struct {
// Host string
// Port int
// }
//
// // Create an Optional that focuses on the Database field
// dbOptional := optional.MakeOptional(
// func(c Config) option.Option[*DatabaseConfig] {
// if c.Database != nil {
// return option.Some(c.Database)
// }
// return option.None[*DatabaseConfig]()
// },
// func(c Config, db *DatabaseConfig) Config {
// c.Database = db
// return c
// },
// )
//
// // Assert that the database config is present
// hasDatabaseConfig := assert.FromOptional(dbOptional)
//
// config := Config{Database: &DatabaseConfig{Host: "localhost", Port: 5432}}
// hasDatabaseConfig(config)(t) // Passes
//
// emptyConfig := Config{Database: nil}
// hasDatabaseConfig(emptyConfig)(t) // Fails
//
//go:inline
func FromOptional[S, T any](opt Optional[S, T]) reader.Reader[S, Reader] {
return fromOptionalGetter(opt.GetOption, "Optional: %s", opt)
}
// FromPrism creates an assertion that checks if a Prism can successfully extract a value.
// A Prism is an optic used to select part of a sum type (tagged union or variant).
//
// This function is useful when you have a Prism optic and want to assert that a value
// matches a specific variant of a sum type. The assertion passes if the Prism's GetOption
// returns Some (meaning the value is of the expected variant), and fails if it returns None
// (meaning the value is a different variant).
//
// This enables variant-focused testing where you verify that a value is of a particular
// type or matches a specific condition within a sum type.
//
// Parameters:
// - p: A Prism optic that focuses from type S to type T
//
// Returns:
// - A Reader that asserts the prism successfully extracts when applied to a value of type S
//
// Example:
//
// type Result interface{ isResult() }
// type Success struct{ Value int }
// type Failure struct{ Error string }
//
// func (Success) isResult() {}
// func (Failure) isResult() {}
//
// // Create a Prism that focuses on Success variant
// successPrism := prism.MakePrism(
// func(r Result) option.Option[int] {
// if s, ok := r.(Success); ok {
// return option.Some(s.Value)
// }
// return option.None[int]()
// },
// func(v int) Result { return Success{Value: v} },
// )
//
// // Assert that the result is a Success
// isSuccess := assert.FromPrism(successPrism)
//
// result1 := Success{Value: 42}
// isSuccess(result1)(t) // Passes
//
// result2 := Failure{Error: "something went wrong"}
// isSuccess(result2)(t) // Fails
//
//go:inline
func FromPrism[S, T any](p Prism[S, T]) reader.Reader[S, Reader] {
return fromOptionalGetter(p.GetOption, "Prism: %s", p)
}


@@ -16,94 +16,677 @@
package assert
import (
"fmt"
"errors"
"testing"
"github.com/IBM/fp-go/v2/eq"
"github.com/IBM/fp-go/v2/optics/prism"
"github.com/IBM/fp-go/v2/option"
"github.com/IBM/fp-go/v2/result"
"github.com/stretchr/testify/assert"
S "github.com/IBM/fp-go/v2/string"
)
var (
errTest = fmt.Errorf("test failure")
// Eq is the equal predicate checking if objects are equal
Eq = eq.FromEquals(assert.ObjectsAreEqual)
)
func wrap1[T any](wrapped func(t assert.TestingT, expected, actual any, msgAndArgs ...any) bool, t *testing.T, expected T) result.Kleisli[T, T] {
return func(actual T) Result[T] {
ok := wrapped(t, expected, actual)
if ok {
return result.Of(actual)
}
return result.Left[T](errTest)
}
}
// NotEqual tests if the expected and the actual values are not equal
func NotEqual[T any](t *testing.T, expected T) result.Kleisli[T, T] {
return wrap1(assert.NotEqual, t, expected)
}
// Equal tests if the expected and the actual values are equal
func Equal[T any](t *testing.T, expected T) result.Kleisli[T, T] {
return wrap1(assert.Equal, t, expected)
}
// Length tests if an array has the expected length
func Length[T any](t *testing.T, expected int) result.Kleisli[[]T, []T] {
return func(actual []T) Result[[]T] {
ok := assert.Len(t, actual, expected)
if ok {
return result.Of(actual)
}
return result.Left[[]T](errTest)
}
}
func TestEqual(t *testing.T) {
t.Run("should pass when values are equal", func(t *testing.T) {
result := Equal(42)(42)(t)
if !result {
t.Error("Expected Equal to pass for equal values")
}
})
t.Run("should fail when values are not equal", func(t *testing.T) {
mockT := &testing.T{}
result := Equal(42)(43)(mockT)
if result {
t.Error("Expected Equal to fail for different values")
}
})
t.Run("should work with strings", func(t *testing.T) {
result := Equal("hello")("hello")(t)
if !result {
t.Error("Expected Equal to pass for equal strings")
}
})
}
// NoError validates that there is no error
func NoError[T any](t *testing.T) result.Operator[T, T] {
return func(actual Result[T]) Result[T] {
return result.MonadFold(actual, func(e error) Result[T] {
assert.NoError(t, e)
return result.Left[T](e)
}, func(value T) Result[T] {
assert.NoError(t, nil)
return result.Of(value)
})
}
}
func TestNotEqual(t *testing.T) {
t.Run("should pass when values are not equal", func(t *testing.T) {
result := NotEqual(42)(43)(t)
if !result {
t.Error("Expected NotEqual to pass for different values")
}
})
t.Run("should fail when values are equal", func(t *testing.T) {
mockT := &testing.T{}
result := NotEqual(42)(42)(mockT)
if result {
t.Error("Expected NotEqual to fail for equal values")
}
})
}
func TestArrayNotEmpty(t *testing.T) {
t.Run("should pass for non-empty array", func(t *testing.T) {
arr := []int{1, 2, 3}
result := ArrayNotEmpty(arr)(t)
if !result {
t.Error("Expected ArrayNotEmpty to pass for non-empty array")
}
})
t.Run("should fail for empty array", func(t *testing.T) {
mockT := &testing.T{}
arr := []int{}
result := ArrayNotEmpty(arr)(mockT)
if result {
t.Error("Expected ArrayNotEmpty to fail for empty array")
}
})
}
func TestRecordNotEmpty(t *testing.T) {
t.Run("should pass for non-empty map", func(t *testing.T) {
mp := map[string]int{"a": 1, "b": 2}
result := RecordNotEmpty(mp)(t)
if !result {
t.Error("Expected RecordNotEmpty to pass for non-empty map")
}
})
t.Run("should fail for empty map", func(t *testing.T) {
mockT := &testing.T{}
mp := map[string]int{}
result := RecordNotEmpty(mp)(mockT)
if result {
t.Error("Expected RecordNotEmpty to fail for empty map")
}
})
}
func TestArrayLength(t *testing.T) {
t.Run("should pass when length matches", func(t *testing.T) {
arr := []int{1, 2, 3}
result := ArrayLength[int](3)(arr)(t)
if !result {
t.Error("Expected ArrayLength to pass when length matches")
}
})
t.Run("should fail when length doesn't match", func(t *testing.T) {
mockT := &testing.T{}
arr := []int{1, 2, 3}
result := ArrayLength[int](5)(arr)(mockT)
if result {
t.Error("Expected ArrayLength to fail when length doesn't match")
}
})
t.Run("should work with empty array", func(t *testing.T) {
arr := []string{}
result := ArrayLength[string](0)(arr)(t)
if !result {
t.Error("Expected ArrayLength to pass for empty array with expected length 0")
}
})
}
func TestRecordLength(t *testing.T) {
t.Run("should pass when map length matches", func(t *testing.T) {
mp := map[string]int{"a": 1, "b": 2}
result := RecordLength[string, int](2)(mp)(t)
if !result {
t.Error("Expected RecordLength to pass when length matches")
}
})
t.Run("should fail when map length doesn't match", func(t *testing.T) {
mockT := &testing.T{}
mp := map[string]int{"a": 1}
result := RecordLength[string, int](3)(mp)(mockT)
if result {
t.Error("Expected RecordLength to fail when length doesn't match")
}
})
}
func TestStringLength(t *testing.T) {
t.Run("should pass when string length matches", func(t *testing.T) {
str := "hello"
result := StringLength[string, int](5)(str)(t)
if !result {
t.Error("Expected StringLength to pass when length matches")
}
})
t.Run("should fail when string length doesn't match", func(t *testing.T) {
mockT := &testing.T{}
str := "hello"
result := StringLength[string, int](10)(str)(mockT)
if result {
t.Error("Expected StringLength to fail when length doesn't match")
}
})
t.Run("should work with empty string", func(t *testing.T) {
str := ""
result := StringLength[string, int](0)(str)(t)
if !result {
t.Error("Expected StringLength to pass for empty string with expected length 0")
}
})
}
func TestNoError(t *testing.T) {
t.Run("should pass when error is nil", func(t *testing.T) {
result := NoError(nil)(t)
if !result {
t.Error("Expected NoError to pass when error is nil")
}
})
t.Run("should fail when error is not nil", func(t *testing.T) {
mockT := &testing.T{}
err := errors.New("test error")
result := NoError(err)(mockT)
if result {
t.Error("Expected NoError to fail when error is not nil")
}
})
}
func TestError(t *testing.T) {
t.Run("should pass when error is not nil", func(t *testing.T) {
err := errors.New("test error")
result := Error(err)(t)
if !result {
t.Error("Expected Error to pass when error is not nil")
}
})
t.Run("should fail when error is nil", func(t *testing.T) {
mockT := &testing.T{}
result := Error(nil)(mockT)
if result {
t.Error("Expected Error to fail when error is nil")
}
})
}
func TestSuccess(t *testing.T) {
t.Run("should pass for successful result", func(t *testing.T) {
res := result.Of(42)
result := Success(res)(t)
if !result {
t.Error("Expected Success to pass for successful result")
}
})
t.Run("should fail for error result", func(t *testing.T) {
mockT := &testing.T{}
res := result.Left[int](errors.New("test error"))
result := Success(res)(mockT)
if result {
t.Error("Expected Success to fail for error result")
}
})
}
func TestFailure(t *testing.T) {
t.Run("should pass for error result", func(t *testing.T) {
res := result.Left[int](errors.New("test error"))
result := Failure(res)(t)
if !result {
t.Error("Expected Failure to pass for error result")
}
})
t.Run("should fail for successful result", func(t *testing.T) {
mockT := &testing.T{}
res := result.Of(42)
result := Failure(res)(mockT)
if result {
t.Error("Expected Failure to fail for successful result")
}
})
}
func TestArrayContains(t *testing.T) {
t.Run("should pass when element is in array", func(t *testing.T) {
arr := []int{1, 2, 3, 4, 5}
result := ArrayContains(3)(arr)(t)
if !result {
t.Error("Expected ArrayContains to pass when element is in array")
}
})
t.Run("should fail when element is not in array", func(t *testing.T) {
mockT := &testing.T{}
arr := []int{1, 2, 3}
result := ArrayContains(10)(arr)(mockT)
if result {
t.Error("Expected ArrayContains to fail when element is not in array")
}
})
t.Run("should work with strings", func(t *testing.T) {
arr := []string{"apple", "banana", "cherry"}
result := ArrayContains("banana")(arr)(t)
if !result {
t.Error("Expected ArrayContains to pass for string element")
}
})
}
func TestContainsKey(t *testing.T) {
t.Run("should pass when key exists in map", func(t *testing.T) {
mp := map[string]int{"a": 1, "b": 2, "c": 3}
result := ContainsKey[int]("b")(mp)(t)
if !result {
t.Error("Expected ContainsKey to pass when key exists")
}
})
t.Run("should fail when key doesn't exist in map", func(t *testing.T) {
mockT := &testing.T{}
mp := map[string]int{"a": 1, "b": 2}
result := ContainsKey[int]("z")(mp)(mockT)
if result {
t.Error("Expected ContainsKey to fail when key doesn't exist")
}
})
}
func TestNotContainsKey(t *testing.T) {
t.Run("should pass when key doesn't exist in map", func(t *testing.T) {
mp := map[string]int{"a": 1, "b": 2}
result := NotContainsKey[int]("z")(mp)(t)
if !result {
t.Error("Expected NotContainsKey to pass when key doesn't exist")
}
})
t.Run("should fail when key exists in map", func(t *testing.T) {
mockT := &testing.T{}
mp := map[string]int{"a": 1, "b": 2}
result := NotContainsKey[int]("a")(mp)(mockT)
if result {
t.Error("Expected NotContainsKey to fail when key exists")
}
})
}
func TestThat(t *testing.T) {
t.Run("should pass when predicate is true", func(t *testing.T) {
isEven := func(n int) bool { return n%2 == 0 }
result := That(isEven)(42)(t)
if !result {
t.Error("Expected That to pass when predicate is true")
}
})
t.Run("should fail when predicate is false", func(t *testing.T) {
mockT := &testing.T{}
isEven := func(n int) bool { return n%2 == 0 }
result := That(isEven)(43)(mockT)
if result {
t.Error("Expected That to fail when predicate is false")
}
})
t.Run("should work with string predicates", func(t *testing.T) {
startsWithH := func(s string) bool { return S.IsNonEmpty(s) && s[0] == 'h' }
result := That(startsWithH)("hello")(t)
if !result {
t.Error("Expected That to pass for string predicate")
}
})
}
func TestAllOf(t *testing.T) {
t.Run("should pass when all assertions pass", func(t *testing.T) {
assertions := AllOf([]Reader{
Equal(42)(42),
Equal("hello")("hello"),
ArrayNotEmpty([]int{1, 2, 3}),
})
result := assertions(t)
if !result {
t.Error("Expected AllOf to pass when all assertions pass")
}
})
t.Run("should fail when any assertion fails", func(t *testing.T) {
mockT := &testing.T{}
assertions := AllOf([]Reader{
Equal(42)(42),
Equal("hello")("goodbye"),
ArrayNotEmpty([]int{1, 2, 3}),
})
result := assertions(mockT)
if result {
t.Error("Expected AllOf to fail when any assertion fails")
}
})
t.Run("should work with empty array", func(t *testing.T) {
assertions := AllOf([]Reader{})
result := assertions(t)
if !result {
t.Error("Expected AllOf to pass for empty array")
}
})
t.Run("should combine multiple array assertions", func(t *testing.T) {
arr := []int{1, 2, 3, 4, 5}
assertions := AllOf([]Reader{
ArrayNotEmpty(arr),
ArrayLength[int](5)(arr),
ArrayContains(3)(arr),
})
result := assertions(t)
if !result {
t.Error("Expected AllOf to pass for multiple array assertions")
}
})
}
// ArrayContains tests if a value is contained in an array
func ArrayContains[T any](t *testing.T, expected T) result.Kleisli[[]T, []T] {
return func(actual []T) Result[[]T] {
ok := assert.Contains(t, actual, expected)
if ok {
return result.Of(actual)
}
return result.Left[[]T](errTest)
}
}
func TestRunAll(t *testing.T) {
t.Run("should run all named test cases", func(t *testing.T) {
testcases := map[string]Reader{
"equality": Equal(42)(42),
"string_check": Equal("test")("test"),
"array_check": ArrayNotEmpty([]int{1, 2, 3}),
}
result := RunAll(testcases)(t)
if !result {
t.Error("Expected RunAll to pass when all test cases pass")
}
})
// Note: Testing failure behavior of RunAll is tricky because subtests
// will actually fail in the test framework. The function works correctly
// as demonstrated by the passing test above.
t.Run("should work with empty test cases", func(t *testing.T) {
testcases := map[string]Reader{}
result := RunAll(testcases)(t)
if !result {
t.Error("Expected RunAll to pass for empty test cases")
}
})
}
// ContainsKey tests if a key is contained in a map
func ContainsKey[T any, K comparable](t *testing.T, expected K) result.Kleisli[map[K]T, map[K]T] {
return func(actual map[K]T) Result[map[K]T] {
ok := assert.Contains(t, actual, expected)
if ok {
return result.Of(actual)
}
return result.Left[map[K]T](errTest)
}
}
func TestEq(t *testing.T) {
t.Run("should return true for equal values", func(t *testing.T) {
if !Eq.Equals(42, 42) {
t.Error("Expected Eq to return true for equal integers")
}
})
t.Run("should return false for different values", func(t *testing.T) {
if Eq.Equals(42, 43) {
t.Error("Expected Eq to return false for different integers")
}
})
t.Run("should work with strings", func(t *testing.T) {
if !Eq.Equals("hello", "hello") {
t.Error("Expected Eq to return true for equal strings")
}
if Eq.Equals("hello", "world") {
t.Error("Expected Eq to return false for different strings")
}
})
t.Run("should work with slices", func(t *testing.T) {
arr1 := []int{1, 2, 3}
arr2 := []int{1, 2, 3}
if !Eq.Equals(arr1, arr2) {
t.Error("Expected Eq to return true for equal slices")
}
})
}
// NotContainsKey tests if a key is not contained in a map
func NotContainsKey[T any, K comparable](t *testing.T, expected K) result.Kleisli[map[K]T, map[K]T] {
return func(actual map[K]T) Result[map[K]T] {
ok := assert.NotContains(t, actual, expected)
if ok {
return result.Of(actual)
}
return result.Left[map[K]T](errTest)
}
}
func TestLocal(t *testing.T) {
type User struct {
Name string
Age int
}
t.Run("should focus assertion on a property", func(t *testing.T) {
// Create an assertion that checks if age is positive
ageIsPositive := That(func(age int) bool { return age > 0 })
// Focus this assertion on the Age field of User
userAgeIsPositive := Local(func(u User) int { return u.Age })(ageIsPositive)
// Test with a user who has a positive age
user := User{Name: "Alice", Age: 30}
result := userAgeIsPositive(user)(t)
if !result {
t.Error("Expected focused assertion to pass for positive age")
}
})
t.Run("should fail when focused property doesn't match", func(t *testing.T) {
mockT := &testing.T{}
ageIsPositive := That(func(age int) bool { return age > 0 })
userAgeIsPositive := Local(func(u User) int { return u.Age })(ageIsPositive)
// Test with a user who has zero age
user := User{Name: "Bob", Age: 0}
result := userAgeIsPositive(user)(mockT)
if result {
t.Error("Expected focused assertion to fail for zero age")
}
})
t.Run("should compose with other assertions", func(t *testing.T) {
// Create multiple focused assertions
nameNotEmpty := Local(func(u User) string { return u.Name })(
That(S.IsNonEmpty),
)
ageInRange := Local(func(u User) int { return u.Age })(
That(func(age int) bool { return age >= 18 && age <= 100 }),
)
user := User{Name: "Charlie", Age: 25}
assertions := AllOf([]Reader{
nameNotEmpty(user),
ageInRange(user),
})
result := assertions(t)
if !result {
t.Error("Expected composed focused assertions to pass")
}
})
t.Run("should work with Equal assertion", func(t *testing.T) {
// Focus Equal assertion on Name field
nameIsAlice := Local(func(u User) string { return u.Name })(Equal("Alice"))
user := User{Name: "Alice", Age: 30}
result := nameIsAlice(user)(t)
if !result {
t.Error("Expected focused Equal assertion to pass")
}
})
}
func TestLocalL(t *testing.T) {
// Note: LocalL requires lens package which provides lens operations.
// This test demonstrates the concept, but actual usage would require
// proper lens definitions.
t.Run("conceptual test for LocalL", func(t *testing.T) {
// LocalL is similar to Local but uses lenses for focusing.
// It would be used like:
// validEmail := That(func(email string) bool { return strings.Contains(email, "@") })
// validPersonEmail := LocalL(emailLens)(validEmail)
//
// The actual implementation would require lens definitions from the lens package.
// This test serves as documentation for the intended usage.
})
}
func TestFromOptional(t *testing.T) {
type DatabaseConfig struct {
Host string
Port int
}
type Config struct {
Database *DatabaseConfig
}
// Create an Optional that focuses on the Database field
dbOptional := Optional[Config, *DatabaseConfig]{
GetOption: func(c Config) option.Option[*DatabaseConfig] {
if c.Database != nil {
return option.Of(c.Database)
}
return option.None[*DatabaseConfig]()
},
Set: func(db *DatabaseConfig) func(Config) Config {
return func(c Config) Config {
c.Database = db
return c
}
},
}
t.Run("should pass when optional value is present", func(t *testing.T) {
config := Config{Database: &DatabaseConfig{Host: "localhost", Port: 5432}}
hasDatabaseConfig := FromOptional(dbOptional)
result := hasDatabaseConfig(config)(t)
if !result {
t.Error("Expected FromOptional to pass when optional value is present")
}
})
t.Run("should fail when optional value is absent", func(t *testing.T) {
mockT := &testing.T{}
emptyConfig := Config{Database: nil}
hasDatabaseConfig := FromOptional(dbOptional)
result := hasDatabaseConfig(emptyConfig)(mockT)
if result {
t.Error("Expected FromOptional to fail when optional value is absent")
}
})
t.Run("should work with nested optionals", func(t *testing.T) {
type AdvancedSettings struct {
Cache bool
}
type Settings struct {
Advanced *AdvancedSettings
}
advancedOptional := Optional[Settings, *AdvancedSettings]{
GetOption: func(s Settings) option.Option[*AdvancedSettings] {
if s.Advanced != nil {
return option.Of(s.Advanced)
}
return option.None[*AdvancedSettings]()
},
Set: func(adv *AdvancedSettings) func(Settings) Settings {
return func(s Settings) Settings {
s.Advanced = adv
return s
}
},
}
settings := Settings{Advanced: &AdvancedSettings{Cache: true}}
hasAdvanced := FromOptional(advancedOptional)
result := hasAdvanced(settings)(t)
if !result {
t.Error("Expected FromOptional to pass for nested optional")
}
})
}
// Helper types for Prism testing
type PrismTestResult interface {
isPrismTestResult()
}
type PrismTestSuccess struct {
Value int
}
type PrismTestFailure struct {
Error string
}
func (PrismTestSuccess) isPrismTestResult() {}
func (PrismTestFailure) isPrismTestResult() {}
func TestFromPrism(t *testing.T) {
// Create a Prism that focuses on Success variant using prism.MakePrism
successPrism := prism.MakePrism(
func(r PrismTestResult) option.Option[int] {
if s, ok := r.(PrismTestSuccess); ok {
return option.Of(s.Value)
}
return option.None[int]()
},
func(v int) PrismTestResult {
return PrismTestSuccess{Value: v}
},
)
// Create a Prism that focuses on Failure variant
failurePrism := prism.MakePrism(
func(r PrismTestResult) option.Option[string] {
if f, ok := r.(PrismTestFailure); ok {
return option.Of(f.Error)
}
return option.None[string]()
},
func(err string) PrismTestResult {
return PrismTestFailure{Error: err}
},
)
t.Run("should pass when prism successfully extracts", func(t *testing.T) {
result := PrismTestSuccess{Value: 42}
isSuccess := FromPrism(successPrism)
testResult := isSuccess(result)(t)
if !testResult {
t.Error("Expected FromPrism to pass when prism extracts successfully")
}
})
t.Run("should fail when prism cannot extract", func(t *testing.T) {
mockT := &testing.T{}
result := PrismTestFailure{Error: "something went wrong"}
isSuccess := FromPrism(successPrism)
testResult := isSuccess(result)(mockT)
if testResult {
t.Error("Expected FromPrism to fail when prism cannot extract")
}
})
t.Run("should work with failure prism", func(t *testing.T) {
result := PrismTestFailure{Error: "test error"}
isFailure := FromPrism(failurePrism)
testResult := isFailure(result)(t)
if !testResult {
t.Error("Expected FromPrism to pass for failure prism on failure result")
}
})
t.Run("should fail with failure prism on success result", func(t *testing.T) {
mockT := &testing.T{}
result := PrismTestSuccess{Value: 100}
isFailure := FromPrism(failurePrism)
testResult := isFailure(result)(mockT)
if testResult {
t.Error("Expected FromPrism to fail for failure prism on success result")
}
})
}

v2/assert/example_test.go

@@ -0,0 +1,236 @@
// Copyright (c) 2023 - 2025 IBM Corp.
// All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package assert_test
import (
"errors"
"strings"
"testing"
"github.com/IBM/fp-go/v2/assert"
N "github.com/IBM/fp-go/v2/number"
"github.com/IBM/fp-go/v2/result"
)
// Example_basicAssertions demonstrates basic equality and inequality assertions
func Example_basicAssertions() {
// This would be in a real test function
var t *testing.T // placeholder for example
// Basic equality
value := 42
assert.Equal(42)(value)(t)
// String equality
name := "Alice"
assert.Equal("Alice")(name)(t)
// Inequality
assert.NotEqual(10)(value)(t)
}
// Example_arrayAssertions demonstrates array-related assertions
func Example_arrayAssertions() {
var t *testing.T // placeholder for example
numbers := []int{1, 2, 3, 4, 5}
// Check array is not empty
assert.ArrayNotEmpty(numbers)(t)
// Check array length
assert.ArrayLength[int](5)(numbers)(t)
// Check array contains a value
assert.ArrayContains(3)(numbers)(t)
}
// Example_mapAssertions demonstrates map-related assertions
func Example_mapAssertions() {
var t *testing.T // placeholder for example
config := map[string]int{
"timeout": 30,
"retries": 3,
"maxSize": 1000,
}
// Check map is not empty
assert.RecordNotEmpty(config)(t)
// Check map length
assert.RecordLength[string, int](3)(config)(t)
// Check map contains key
assert.ContainsKey[int]("timeout")(config)(t)
// Check map does not contain key
assert.NotContainsKey[int]("unknown")(config)(t)
}
// Example_errorAssertions demonstrates error-related assertions
func Example_errorAssertions() {
var t *testing.T // placeholder for example
// Assert no error
err := doSomethingSuccessful()
assert.NoError(err)(t)
// Assert error exists
err2 := doSomethingThatFails()
assert.Error(err2)(t)
}
// Example_resultAssertions demonstrates Result type assertions
func Example_resultAssertions() {
var t *testing.T // placeholder for example
// Assert success
successResult := result.Of(42)
assert.Success(successResult)(t)
// Assert failure
failureResult := result.Left[int](errors.New("something went wrong"))
assert.Failure(failureResult)(t)
}
// Example_predicateAssertions demonstrates custom predicate assertions
func Example_predicateAssertions() {
var t *testing.T // placeholder for example
// Test if a number is positive
isPositive := N.MoreThan(0)
assert.That(isPositive)(42)(t)
// Test if a string is uppercase
isUppercase := func(s string) bool { return s == strings.ToUpper(s) }
assert.That(isUppercase)("HELLO")(t)
// Test if a number is even
isEven := func(n int) bool { return n%2 == 0 }
assert.That(isEven)(10)(t)
}
// Example_allOf demonstrates combining multiple assertions
func Example_allOf() {
var t *testing.T // placeholder for example
type User struct {
Name string
Age int
Active bool
}
user := User{Name: "Alice", Age: 30, Active: true}
// Combine multiple assertions
assertions := assert.AllOf([]assert.Reader{
assert.Equal("Alice")(user.Name),
assert.Equal(30)(user.Age),
assert.Equal(true)(user.Active),
})
assertions(t)
}
// Example_runAll demonstrates running named test cases
func Example_runAll() {
var t *testing.T // placeholder for example
testcases := map[string]assert.Reader{
"addition": assert.Equal(4)(2 + 2),
"multiplication": assert.Equal(6)(2 * 3),
"subtraction": assert.Equal(1)(3 - 2),
"division": assert.Equal(2)(10 / 5),
}
assert.RunAll(testcases)(t)
}
// Example_local demonstrates focusing assertions on specific properties
func Example_local() {
var t *testing.T // placeholder for example
type User struct {
Name string
Age int
}
// Create an assertion that checks if age is positive
ageIsPositive := assert.That(func(age int) bool { return age > 0 })
// Focus this assertion on the Age field of User
userAgeIsPositive := assert.Local(func(u User) int { return u.Age })(ageIsPositive)
// Now we can test the whole User object
user := User{Name: "Alice", Age: 30}
userAgeIsPositive(user)(t)
}
// Example_composableAssertions demonstrates building complex assertions
func Example_composableAssertions() {
var t *testing.T // placeholder for example
type Config struct {
Host string
Port int
Timeout int
Retries int
}
config := Config{
Host: "localhost",
Port: 8080,
Timeout: 30,
Retries: 3,
}
// Create focused assertions for each field
validHost := assert.Local(func(c Config) string { return c.Host })(
assert.StringNotEmpty,
)
validPort := assert.Local(func(c Config) int { return c.Port })(
assert.That(func(p int) bool { return p > 0 && p < 65536 }),
)
validTimeout := assert.Local(func(c Config) int { return c.Timeout })(
assert.That(func(t int) bool { return t > 0 }),
)
validRetries := assert.Local(func(c Config) int { return c.Retries })(
assert.That(func(r int) bool { return r >= 0 }),
)
// Combine all assertions
validConfig := assert.AllOf([]assert.Reader{
validHost(config),
validPort(config),
validTimeout(config),
validRetries(config),
})
validConfig(t)
}
// Helper functions for examples
func doSomethingSuccessful() error {
return nil
}
func doSomethingThatFails() error {
return errors.New("operation failed")
}


@@ -1,7 +1,35 @@
package assert
import "github.com/IBM/fp-go/v2/result"
import (
"testing"
"github.com/IBM/fp-go/v2/optics/lens"
"github.com/IBM/fp-go/v2/optics/optional"
"github.com/IBM/fp-go/v2/optics/prism"
"github.com/IBM/fp-go/v2/predicate"
"github.com/IBM/fp-go/v2/reader"
"github.com/IBM/fp-go/v2/result"
)
type (
// Result represents a computation that may fail with an error.
Result[T any] = result.Result[T]
// Reader represents a test assertion that depends on a testing.T context and returns a boolean.
Reader = reader.Reader[*testing.T, bool]
// Kleisli represents a function that produces a test assertion Reader from a value of type T.
Kleisli[T any] = reader.Reader[T, Reader]
// Predicate represents a function that tests a value of type T and returns a boolean.
Predicate[T any] = predicate.Predicate[T]
// Lens is a functional reference to a subpart of a data structure.
Lens[S, T any] = lens.Lens[S, T]
// Optional is an optic that focuses on a value that may or may not be present.
Optional[S, T any] = optional.Optional[S, T]
// Prism is an optic that focuses on a case of a sum type.
Prism[S, T any] = prism.Prism[S, T]
)
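// A minimal sketch (illustrative, not part of the package) of how the aliases
// compose: That produces a Kleisli[int], applying it to a value yields a Reader,
// and applying that Reader to a *testing.T runs the assertion.
//
// var isPositive Kleisli[int] = That(func(n int) bool { return n > 0 })
// // isPositive(42) is a Reader; isPositive(42)(t) reports true when the check passes.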


@@ -18,5 +18,7 @@ package boolean
import "github.com/IBM/fp-go/v2/monoid"
type (
// Monoid represents a monoid structure for boolean values.
// A monoid provides an associative binary operation and an identity element.
Monoid = monoid.Monoid[bool]
)
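// Illustrative sketch (MonoidAll is the conjunction instance used by assert.AllOf):
// it combines values with logical AND and uses true as its identity element.
//
// ok := MonoidAll.Concat(true, false) // false
// id := MonoidAll.Empty() // true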


@@ -8,5 +8,5 @@ import (
// BuilderPrism creates a [Prism] that converts between a builder and its type
func BuilderPrism[T any, B Builder[T]](creator func(T) B) Prism[B, T] {
return prism.MakePrism(F.Flow2(B.Build, result.ToOption[T]), creator)
return prism.MakePrismWithName(F.Flow2(B.Build, result.ToOption[T]), creator, "BuilderPrism")
}


@@ -7,9 +7,14 @@ import (
)
type (
// Result represents a computation that may fail with an error.
// It's an alias for Either[error, T].
Result[T any] = result.Result[T]
// Prism is an optic that focuses on a case of a sum type.
// It provides a way to extract and construct values of a specific variant.
Prism[S, A any] = prism.Prism[S, A]
// Option represents an optional value that may or may not be present.
Option[T any] = option.Option[T]
)


@@ -15,14 +15,163 @@
package bytes
// Empty returns an empty byte slice.
//
// This function returns the identity element for the byte slice Monoid,
// which is an empty byte slice. It's useful as a starting point for
// building byte slices or as a default value.
//
// Returns:
// - An empty byte slice ([]byte{})
//
// Properties:
// - Empty() is the identity element for Monoid.Concat
// - Monoid.Concat(Empty(), x) == x
// - Monoid.Concat(x, Empty()) == x
//
// Example - Basic usage:
//
// empty := Empty()
// fmt.Println(len(empty)) // 0
//
// Example - As identity element:
//
// data := []byte("hello")
// result1 := Monoid.Concat(Empty(), data) // []byte("hello")
// result2 := Monoid.Concat(data, Empty()) // []byte("hello")
//
// Example - Building byte slices:
//
// // Start with empty and build up
// buffer := Empty()
// buffer = Monoid.Concat(buffer, []byte("Hello"))
// buffer = Monoid.Concat(buffer, []byte(" "))
// buffer = Monoid.Concat(buffer, []byte("World"))
// // buffer: []byte("Hello World")
//
// See also:
// - Monoid.Empty(): Alternative way to get empty byte slice
// - ConcatAll(): For concatenating multiple byte slices
func Empty() []byte {
return Monoid.Empty()
}
// ToString converts a byte slice to a string.
//
// This function performs a direct conversion from []byte to string.
// The conversion creates a new string with a copy of the byte data.
//
// Parameters:
// - a: The byte slice to convert
//
// Returns:
// - A string containing the same data as the byte slice
//
// Performance Note:
//
// This conversion allocates a new string. For performance-critical code
// that needs to avoid allocations, consider using unsafe.String (Go 1.20+)
// or working directly with byte slices.
//
// Example - Basic conversion:
//
// bytes := []byte("hello")
// str := ToString(bytes)
// fmt.Println(str) // "hello"
//
// Example - Converting binary data:
//
// // ASCII codes for "Hello"
// data := []byte{0x48, 0x65, 0x6c, 0x6c, 0x6f}
// str := ToString(data)
// fmt.Println(str) // "Hello"
//
// Example - Empty byte slice:
//
// empty := Empty()
// str := ToString(empty)
// fmt.Println(str == "") // true
//
// Example - UTF-8 encoded text:
//
// utf8Bytes := []byte("Hello, 世界")
// str := ToString(utf8Bytes)
// fmt.Println(str) // "Hello, 世界"
//
// Example - Round-trip conversion:
//
// original := "test string"
// bytes := []byte(original)
// result := ToString(bytes)
// fmt.Println(original == result) // true
//
// See also:
// - []byte(string): For converting string to byte slice
// - Size(): For getting the length of a byte slice
func ToString(a []byte) string {
return string(a)
}
// Size returns the number of bytes in a byte slice.
//
// This function returns the length of the byte slice, which is the number
// of bytes it contains. This is equivalent to len(as) but provided as a
// named function for use in functional composition.
//
// Parameters:
// - as: The byte slice to measure
//
// Returns:
// - The number of bytes in the slice
//
// Example - Basic usage:
//
// data := []byte("hello")
// size := Size(data)
// fmt.Println(size) // 5
//
// Example - Empty slice:
//
// empty := Empty()
// size := Size(empty)
// fmt.Println(size) // 0
//
// Example - Binary data:
//
// binary := []byte{0x01, 0x02, 0x03, 0x04}
// size := Size(binary)
// fmt.Println(size) // 4
//
// Example - UTF-8 encoded text:
//
// // Note: Size returns byte count, not character count
// utf8 := []byte("Hello, 世界")
// byteCount := Size(utf8)
// fmt.Println(byteCount) // 13 (not 9 characters)
//
// Example - Using in functional composition:
//
// import "github.com/IBM/fp-go/v2/array"
//
// slices := [][]byte{
// []byte("a"),
// []byte("bb"),
// []byte("ccc"),
// }
//
// // Map to get sizes
// sizes := array.Map(Size)(slices)
// // sizes: []int{1, 2, 3}
//
// Example - Checking if slice is empty:
//
// data := []byte("test")
// isEmpty := Size(data) == 0
// fmt.Println(isEmpty) // false
//
// See also:
// - len(): Built-in function for getting slice length
// - ToString(): For converting byte slice to string
func Size(as []byte) int {
return len(as)
}


@@ -187,6 +187,299 @@ func TestOrd(t *testing.T) {
})
}
// TestOrdProperties tests mathematical properties of Ord
func TestOrdProperties(t *testing.T) {
t.Run("reflexivity: x == x", func(t *testing.T) {
testCases := [][]byte{
[]byte{},
[]byte("a"),
[]byte("test"),
[]byte{0x01, 0x02, 0x03},
}
for _, tc := range testCases {
assert.Equal(t, 0, Ord.Compare(tc, tc),
"Compare(%v, %v) should be 0", tc, tc)
assert.True(t, Ord.Equals(tc, tc),
"Equals(%v, %v) should be true", tc, tc)
}
})
t.Run("antisymmetry: if x <= y and y <= x then x == y", func(t *testing.T) {
testCases := []struct {
a, b []byte
}{
{[]byte("abc"), []byte("abc")},
{[]byte{}, []byte{}},
{[]byte{0x01}, []byte{0x01}},
}
for _, tc := range testCases {
cmp1 := Ord.Compare(tc.a, tc.b)
cmp2 := Ord.Compare(tc.b, tc.a)
if cmp1 <= 0 && cmp2 <= 0 {
assert.True(t, Ord.Equals(tc.a, tc.b),
"If %v <= %v and %v <= %v, they should be equal", tc.a, tc.b, tc.b, tc.a)
}
}
})
t.Run("transitivity: if x <= y and y <= z then x <= z", func(t *testing.T) {
x := []byte("a")
y := []byte("b")
z := []byte("c")
cmpXY := Ord.Compare(x, y)
cmpYZ := Ord.Compare(y, z)
cmpXZ := Ord.Compare(x, z)
if cmpXY <= 0 && cmpYZ <= 0 {
assert.True(t, cmpXZ <= 0,
"If %v <= %v and %v <= %v, then %v <= %v", x, y, y, z, x, z)
}
})
t.Run("totality: either x <= y or y <= x", func(t *testing.T) {
testCases := []struct {
a, b []byte
}{
{[]byte("abc"), []byte("abd")},
{[]byte("xyz"), []byte("abc")},
{[]byte{}, []byte("a")},
{[]byte{0x01}, []byte{0x02}},
}
for _, tc := range testCases {
cmp1 := Ord.Compare(tc.a, tc.b)
cmp2 := Ord.Compare(tc.b, tc.a)
assert.True(t, cmp1 <= 0 || cmp2 <= 0,
"Either %v <= %v or %v <= %v must be true", tc.a, tc.b, tc.b, tc.a)
}
})
}
// TestEdgeCases tests edge cases and boundary conditions
func TestEdgeCases(t *testing.T) {
t.Run("very large byte slices", func(t *testing.T) {
large := make([]byte, 1000000)
for i := range large {
large[i] = byte(i % 256)
}
size := Size(large)
assert.Equal(t, 1000000, size)
str := ToString(large)
assert.Equal(t, 1000000, len(str))
})
t.Run("concatenating many slices", func(t *testing.T) {
slices := make([][]byte, 100)
for i := range slices {
slices[i] = []byte{byte(i)}
}
result := ConcatAll(slices...)
assert.Equal(t, 100, Size(result))
})
t.Run("null bytes in slice", func(t *testing.T) {
data := []byte{0x00, 0x01, 0x00, 0x02}
size := Size(data)
assert.Equal(t, 4, size)
str := ToString(data)
assert.Equal(t, 4, len(str))
})
t.Run("comparing slices with null bytes", func(t *testing.T) {
a := []byte{0x00, 0x01}
b := []byte{0x00, 0x02}
assert.Equal(t, -1, Ord.Compare(a, b))
})
}
// TestMonoidConcatPerformance tests concatenation performance characteristics
func TestMonoidConcatPerformance(t *testing.T) {
t.Run("ConcatAll vs repeated Concat", func(t *testing.T) {
slices := [][]byte{
[]byte("a"),
[]byte("b"),
[]byte("c"),
[]byte("d"),
[]byte("e"),
}
// Using ConcatAll
result1 := ConcatAll(slices...)
// Using repeated Concat
result2 := Monoid.Empty()
for _, s := range slices {
result2 = Monoid.Concat(result2, s)
}
assert.Equal(t, result1, result2)
assert.Equal(t, []byte("abcde"), result1)
})
}
// TestRoundTrip tests round-trip conversions
func TestRoundTrip(t *testing.T) {
t.Run("string to bytes to string", func(t *testing.T) {
original := "Hello, World! 世界"
bytes := []byte(original)
result := ToString(bytes)
assert.Equal(t, original, result)
})
t.Run("bytes to string to bytes", func(t *testing.T) {
original := []byte{0x48, 0x65, 0x6c, 0x6c, 0x6f}
str := ToString(original)
result := []byte(str)
assert.Equal(t, original, result)
})
}
// TestConcatAllVariadic tests ConcatAll with various argument counts
func TestConcatAllVariadic(t *testing.T) {
t.Run("zero arguments", func(t *testing.T) {
result := ConcatAll()
assert.Equal(t, []byte{}, result)
})
t.Run("one argument", func(t *testing.T) {
result := ConcatAll([]byte("test"))
assert.Equal(t, []byte("test"), result)
})
t.Run("two arguments", func(t *testing.T) {
result := ConcatAll([]byte("hello"), []byte("world"))
assert.Equal(t, []byte("helloworld"), result)
})
t.Run("many arguments", func(t *testing.T) {
result := ConcatAll(
[]byte("a"),
[]byte("b"),
[]byte("c"),
[]byte("d"),
[]byte("e"),
[]byte("f"),
[]byte("g"),
[]byte("h"),
[]byte("i"),
[]byte("j"),
)
assert.Equal(t, []byte("abcdefghij"), result)
})
}
// Benchmark tests
func BenchmarkToString(b *testing.B) {
data := []byte("Hello, World!")
b.Run("small", func(b *testing.B) {
for b.Loop() {
_ = ToString(data)
}
})
b.Run("large", func(b *testing.B) {
large := make([]byte, 10000)
for i := range large {
large[i] = byte(i % 256)
}
b.ResetTimer()
for b.Loop() {
_ = ToString(large)
}
})
}
func BenchmarkSize(b *testing.B) {
data := []byte("Hello, World!")
for b.Loop() {
_ = Size(data)
}
}
func BenchmarkMonoidConcat(b *testing.B) {
a := []byte("Hello")
c := []byte(" World")
b.Run("small slices", func(b *testing.B) {
for b.Loop() {
_ = Monoid.Concat(a, c)
}
})
b.Run("large slices", func(b *testing.B) {
large1 := make([]byte, 10000)
large2 := make([]byte, 10000)
b.ResetTimer()
for b.Loop() {
_ = Monoid.Concat(large1, large2)
}
})
}
func BenchmarkConcatAll(b *testing.B) {
slices := [][]byte{
[]byte("Hello"),
[]byte(" "),
[]byte("World"),
[]byte("!"),
}
b.Run("few slices", func(b *testing.B) {
for b.Loop() {
_ = ConcatAll(slices...)
}
})
b.Run("many slices", func(b *testing.B) {
many := make([][]byte, 100)
for i := range many {
many[i] = []byte{byte(i)}
}
b.ResetTimer()
for b.Loop() {
_ = ConcatAll(many...)
}
})
}
func BenchmarkOrdCompare(b *testing.B) {
a := []byte("abc")
c := []byte("abd")
b.Run("equal", func(b *testing.B) {
for b.Loop() {
_ = Ord.Compare(a, a)
}
})
b.Run("different", func(b *testing.B) {
for b.Loop() {
_ = Ord.Compare(a, c)
}
})
b.Run("large slices", func(b *testing.B) {
large1 := make([]byte, 10000)
large2 := make([]byte, 10000)
large2[9999] = 1
b.ResetTimer()
for b.Loop() {
_ = Ord.Compare(large1, large2)
}
})
}
// Example tests
func ExampleEmpty() {
empty := Empty()
@@ -219,3 +512,17 @@ func ExampleConcatAll() {
// Output:
}
func ExampleMonoid_concat() {
result := Monoid.Concat([]byte("Hello"), []byte(" World"))
println(string(result)) // Hello World
// Output:
}
func ExampleOrd_compare() {
cmp := Ord.Compare([]byte("abc"), []byte("abd"))
println(cmp) // -1 (abc < abd)
// Output:
}

v2/bytes/coverage.out

@@ -0,0 +1,4 @@
mode: set
github.com/IBM/fp-go/v2/bytes/bytes.go:55.21,57.2 1 1
github.com/IBM/fp-go/v2/bytes/bytes.go:111.32,113.2 1 1
github.com/IBM/fp-go/v2/bytes/bytes.go:175.26,177.2 1 1


@@ -23,12 +23,219 @@ import (
)
var (
// monoid for byte arrays
// Monoid is the Monoid instance for byte slices.
//
// This Monoid combines byte slices through concatenation, with an empty
// byte slice as the identity element. It satisfies the monoid laws:
//
// Identity laws:
// - Monoid.Concat(Monoid.Empty(), x) == x (left identity)
// - Monoid.Concat(x, Monoid.Empty()) == x (right identity)
//
// Associativity law:
// - Monoid.Concat(Monoid.Concat(a, b), c) == Monoid.Concat(a, Monoid.Concat(b, c))
//
// Operations:
// - Empty(): Returns an empty byte slice []byte{}
// - Concat(a, b []byte): Concatenates two byte slices
//
// Example - Basic concatenation:
//
// result := Monoid.Concat([]byte("Hello"), []byte(" World"))
// // result: []byte("Hello World")
//
// Example - Identity element:
//
// empty := Monoid.Empty()
// data := []byte("test")
// result1 := Monoid.Concat(empty, data) // []byte("test")
// result2 := Monoid.Concat(data, empty) // []byte("test")
//
// Example - Building byte buffers:
//
// buffer := Monoid.Empty()
// buffer = Monoid.Concat(buffer, []byte("Line 1\n"))
// buffer = Monoid.Concat(buffer, []byte("Line 2\n"))
// buffer = Monoid.Concat(buffer, []byte("Line 3\n"))
//
// Example - Associativity:
//
// a := []byte("a")
// b := []byte("b")
// c := []byte("c")
// left := Monoid.Concat(Monoid.Concat(a, b), c) // []byte("abc")
// right := Monoid.Concat(a, Monoid.Concat(b, c)) // []byte("abc")
// // left == right
//
// See also:
// - ConcatAll: For concatenating multiple byte slices at once
// - Empty(): Convenience function for getting empty byte slice
Monoid = A.Monoid[byte]()
// ConcatAll concatenates all bytes
// ConcatAll efficiently concatenates multiple byte slices into a single slice.
//
// This function takes a variadic number of byte slices and combines them
// into a single byte slice. It pre-allocates the exact amount of memory
// needed, making it more efficient than repeated concatenation.
//
// Parameters:
// - slices: Zero or more byte slices to concatenate
//
// Returns:
// - A new byte slice containing all input slices concatenated in order
//
// Performance:
//
// ConcatAll is more efficient than using Monoid.Concat repeatedly because
// it calculates the total size upfront and allocates memory once, avoiding
// multiple allocations and copies.
//
// Example - Basic usage:
//
// result := ConcatAll(
// []byte("Hello"),
// []byte(" "),
// []byte("World"),
// )
// // result: []byte("Hello World")
//
// Example - Empty input:
//
// result := ConcatAll()
// // result: []byte{}
//
// Example - Single slice:
//
// result := ConcatAll([]byte("test"))
// // result: []byte("test")
//
// Example - Building protocol messages:
//
// import "encoding/binary"
//
// header := []byte{0x01, 0x02}
// length := make([]byte, 4)
// binary.BigEndian.PutUint32(length, 100)
// payload := []byte("data")
// footer := []byte{0xFF}
//
// message := ConcatAll(header, length, payload, footer)
//
// Example - With empty slices:
//
// result := ConcatAll(
// []byte("a"),
// []byte{},
// []byte("b"),
// []byte{},
// []byte("c"),
// )
// // result: []byte("abc")
//
// Example - Building CSV line:
//
// fields := [][]byte{
// []byte("John"),
// []byte("Doe"),
// []byte("30"),
// }
// separator := []byte(",")
//
// // Interleave fields with separators
// parts := [][]byte{
// fields[0], separator,
// fields[1], separator,
// fields[2],
// }
// line := ConcatAll(parts...)
// // line: []byte("John,Doe,30")
//
// See also:
// - Monoid.Concat: For concatenating exactly two byte slices
// - bytes.Join: Standard library function for joining with separator
ConcatAll = A.ArrayConcatAll[byte]
// Ord implements the default ordering on bytes
// Ord is the Ord instance for byte slices providing lexicographic ordering.
//
// This Ord instance compares byte slices lexicographically (dictionary order),
// comparing bytes from left to right until a difference is found or one slice
// ends. It uses the standard library's bytes.Compare and bytes.Equal functions.
//
// Comparison rules:
// - Compares byte-by-byte from left to right
// - First differing byte determines the order
// - Shorter slice is less than longer slice if all bytes match
// - Empty slice is less than any non-empty slice
//
// Operations:
// - Compare(a, b []byte) int: Returns -1 if a < b, 0 if a == b, 1 if a > b
// - Equals(a, b []byte) bool: Returns true if slices are equal
//
// Example - Basic comparison:
//
// cmp := Ord.Compare([]byte("abc"), []byte("abd"))
// // cmp: -1 (abc < abd)
//
// cmp = Ord.Compare([]byte("xyz"), []byte("abc"))
// // cmp: 1 (xyz > abc)
//
// cmp = Ord.Compare([]byte("test"), []byte("test"))
// // cmp: 0 (equal)
//
// Example - Length differences:
//
// cmp := Ord.Compare([]byte("ab"), []byte("abc"))
// // cmp: -1 (shorter is less)
//
// cmp = Ord.Compare([]byte("abc"), []byte("ab"))
// // cmp: 1 (longer is greater)
//
// Example - Empty slices:
//
// cmp := Ord.Compare([]byte{}, []byte("a"))
// // cmp: -1 (empty is less)
//
// cmp = Ord.Compare([]byte{}, []byte{})
// // cmp: 0 (both empty)
//
// Example - Equality check:
//
// equal := Ord.Equals([]byte("test"), []byte("test"))
// // equal: true
//
// equal = Ord.Equals([]byte("test"), []byte("Test"))
// // equal: false (case-sensitive)
//
// Example - Sorting byte slices:
//
// import "github.com/IBM/fp-go/v2/array"
//
// data := [][]byte{
// []byte("zebra"),
// []byte("apple"),
// []byte("mango"),
// }
//
// sorted := array.Sort(Ord)(data)
// // sorted: [[]byte("apple"), []byte("mango"), []byte("zebra")]
//
// Example - Binary data comparison:
//
// cmp := Ord.Compare([]byte{0x01, 0x02}, []byte{0x01, 0x03})
// // cmp: -1 (0x02 < 0x03)
//
// Example - Finding minimum:
//
// import O "github.com/IBM/fp-go/v2/ord"
//
// a := []byte("xyz")
// b := []byte("abc")
// min := O.Min(Ord)(a, b)
// // min: []byte("abc")
//
// See also:
// - bytes.Compare: Standard library comparison function
// - bytes.Equal: Standard library equality function
// - array.Sort: For sorting slices using an Ord instance
Ord = O.MakeOrd(bytes.Compare, bytes.Equal)
)


@@ -0,0 +1,623 @@
package circuitbreaker
import (
"time"
"github.com/IBM/fp-go/v2/either"
"github.com/IBM/fp-go/v2/function"
F "github.com/IBM/fp-go/v2/function"
"github.com/IBM/fp-go/v2/identity"
"github.com/IBM/fp-go/v2/io"
"github.com/IBM/fp-go/v2/ioref"
"github.com/IBM/fp-go/v2/lazy"
"github.com/IBM/fp-go/v2/optics/lens"
"github.com/IBM/fp-go/v2/option"
"github.com/IBM/fp-go/v2/pair"
"github.com/IBM/fp-go/v2/reader"
"github.com/IBM/fp-go/v2/retry"
)
var (
canaryRequestLens = lens.MakeLensWithName(
func(os openState) bool { return os.canaryRequest },
func(os openState, flag bool) openState {
os.canaryRequest = flag
return os
},
"openState.CanaryRequest",
)
retryStatusLens = lens.MakeLensWithName(
func(os openState) retry.RetryStatus { return os.retryStatus },
func(os openState, status retry.RetryStatus) openState {
os.retryStatus = status
return os
},
"openState.RetryStatus",
)
resetAtLens = lens.MakeLensWithName(
func(os openState) time.Time { return os.resetAt },
func(os openState, tm time.Time) openState {
os.resetAt = tm
return os
},
"openState.ResetAt",
)
openedAtLens = lens.MakeLensWithName(
func(os openState) time.Time { return os.openedAt },
func(os openState, tm time.Time) openState {
os.openedAt = tm
return os
},
"openState.OpenedAt",
)
createClosedCircuit = either.Right[openState, ClosedState]
createOpenCircuit = either.Left[ClosedState, openState]
// MakeClosedIORef creates an IORef containing a closed circuit breaker state.
// It wraps the provided ClosedState in a Right (closed) BreakerState and creates
// a mutable reference to it.
//
// Parameters:
// - closedState: The initial closed state configuration
//
// Returns:
// - An IO operation that creates an IORef[BreakerState] initialized to closed state
//
// Thread Safety: The returned IORef[BreakerState] is thread-safe. It uses atomic
// operations for all read/write/modify operations. The BreakerState itself is immutable.
MakeClosedIORef = F.Flow2(
createClosedCircuit,
ioref.MakeIORef,
)
// IsOpen checks if a BreakerState is in the open state.
// Returns true if the circuit breaker is open (blocking requests), false otherwise.
IsOpen = either.IsLeft[openState, ClosedState]
// IsClosed checks if a BreakerState is in the closed state.
// Returns true if the circuit breaker is closed (allowing requests), false otherwise.
IsClosed = either.IsRight[openState, ClosedState]
// modifyV creates a Reader that sequences an IORef modification operation.
// It takes an IORef[BreakerState] and returns a Reader that, when given an endomorphism
// (a function from BreakerState to BreakerState), produces an IO operation that modifies
// the IORef and returns the new state.
//
// This is used internally to create state modification operations that can be composed
// with other Reader-based operations in the circuit breaker logic.
//
// Thread Safety: The IORef modification is atomic. Multiple concurrent calls will be
// serialized by the IORef's atomic operations.
//
// Type signature: Reader[IORef[BreakerState], io.Kleisli[Endomorphism[BreakerState], BreakerState]]
modifyV = reader.Sequence(ioref.Modify[BreakerState])
initialRetry = retry.DefaultRetryStatus
// testCircuit sets the canaryRequest flag to true in an openState.
// This is used to mark that the circuit breaker is in half-open state,
// allowing a single test request (canary) to check if the service has recovered.
//
// When canaryRequest is true:
// - One request is allowed through to test the service
// - If the canary succeeds, the circuit closes
// - If the canary fails, the circuit remains open with an extended reset time
//
// Thread Safety: This is a pure function that returns a new openState; it does not
// modify its input. Safe for concurrent use.
//
// Type signature: Endomorphism[openState]
testCircuit = canaryRequestLens.Set(true)
)
// makeOpenCircuitFromPolicy creates a function that constructs an openState from a retry policy.
// This is a curried function that takes a retry policy and returns a function that takes a retry status
// and current time to produce an openState with calculated reset time.
//
// The function applies the retry policy to determine the next retry delay and calculates
// the resetAt time by adding the delay to the current time. If no previous delay exists
// (first failure), the resetAt is set to the current time.
//
// Parameters:
// - policy: The retry policy that determines backoff strategy (e.g., exponential backoff)
//
// Returns:
// - A curried function that takes:
// 1. rs (retry.RetryStatus): The current retry status containing retry count and previous delay
// 2. ct (time.Time): The current time when the circuit is opening
// And returns an openState with:
// - openedAt: Set to the current time (ct)
// - resetAt: Current time plus the delay from the retry policy
// - retryStatus: The updated retry status from applying the policy
// - canaryRequest: false (will be set to true when reset time is reached)
//
// Thread Safety: This is a pure function that creates new openState instances.
// Safe for concurrent use.
//
// Example:
//
// policy := retry.Monoid.Concat(retry.LimitRetries(10), retry.ExponentialBackoff(1*time.Second))
// makeOpen := makeOpenCircuitFromPolicy(policy)
// openState := makeOpen(retry.DefaultRetryStatus)(time.Now())
// // openState.resetAt will be approximately 1 second from now
func makeOpenCircuitFromPolicy(policy retry.RetryPolicy) func(rs retry.RetryStatus) func(ct time.Time) openState {
return func(rs retry.RetryStatus) func(ct time.Time) openState {
retryStatus := retry.ApplyPolicy(policy, rs)
return func(ct time.Time) openState {
resetTime := F.Pipe2(
retryStatus,
retry.PreviousDelayLens.Get,
option.Fold(
F.Pipe1(
ct,
lazy.Of,
),
ct.Add,
),
)
return openState{openedAt: ct, resetAt: resetTime, retryStatus: retryStatus}
}
}
}
// extendOpenCircuitFromMakeCircuit creates a function that extends the open state of a circuit breaker
// when a canary request fails. It takes a circuit maker function and returns a function that,
// given the current time, produces an endomorphism that updates an openState.
//
// This function is used when a canary request (test request in half-open state) fails.
// It extends the circuit breaker's open period by:
// 1. Extracting the current retry status from the open state
// 2. Using the makeCircuit function to calculate a new open state with updated retry status
// 3. Applying the current time to get the new state
// 4. Setting the canaryRequest flag to true to allow another test request later
//
// Parameters:
// - makeCircuit: A function that creates an openState from a retry status and current time.
// This is typically created by makeOpenCircuitFromPolicy.
//
// Returns:
// - A curried function that takes:
// 1. ct (time.Time): The current time when extending the circuit
// And returns an Endomorphism[openState] that:
// - Increments the retry count
// - Calculates a new resetAt time based on the retry policy (typically with exponential backoff)
// - Sets canaryRequest to true for the next test attempt
//
// Thread Safety: This is a pure function that returns new openState instances.
// Safe for concurrent use.
//
// Usage Context:
// - Called when a canary request fails in the half-open state
// - Extends the open period with increased backoff delay
// - Prepares the circuit for another canary attempt at the new resetAt time
func extendOpenCircuitFromMakeCircuit(
makeCircuit func(rs retry.RetryStatus) func(ct time.Time) openState,
) func(time.Time) Endomorphism[openState] {
return func(ct time.Time) Endomorphism[openState] {
return F.Flow4(
retryStatusLens.Get,
makeCircuit,
identity.Flap[openState](ct),
testCircuit,
)
}
}
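// Illustrative sketch for extendOpenCircuitFromMakeCircuit (mirrors the tests in this package;
// the policy and values below are assumptions, not a prescribed configuration):
//
// makeCircuit := makeOpenCircuitFromPolicy(retry.ExponentialBackoff(1 * time.Second))
// extend := extendOpenCircuitFromMakeCircuit(makeCircuit)
// now := time.Now()
// next := extend(now)(openState{openedAt: now, resetAt: now, retryStatus: retry.DefaultRetryStatus})
// // next.canaryRequest == true, next.retryStatus.IterNumber is incremented,
// // and next.resetAt is pushed into the future according to the backoff policy.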
// isResetTimeExceeded checks if the reset time for an open circuit has been exceeded.
// This is used to determine if the circuit breaker should transition from open to half-open state
// by allowing a canary request.
//
// The function returns an option.Kleisli that succeeds (returns Some) only when:
// 1. The circuit is not already in canary mode (canaryRequest is false)
// 2. The current time is after the resetAt time
//
// Parameters:
// - ct: The current time to compare against the reset time
//
// Returns:
// - An option.Kleisli[openState, openState] that:
// - Returns Some(openState) if the reset time has been exceeded and no canary is active
// - Returns None if the reset time has not been exceeded or a canary request is already active
//
// Thread Safety: This is a pure function that does not modify its input.
// Safe for concurrent use.
//
// Usage Context:
// - Called when the circuit is open to check if it's time to attempt a canary request
// - If this returns Some, the circuit transitions to half-open state (canary mode)
// - If this returns None, the circuit remains fully open and requests are blocked
func isResetTimeExceeded(ct time.Time) option.Kleisli[openState, openState] {
return option.FromPredicate(func(open openState) bool {
return !open.canaryRequest && ct.After(resetAtLens.Get(open))
})
}
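// Illustrative sketch for isResetTimeExceeded (values are assumptions, mirroring the tests in this package):
//
// ct := time.Now()
// _ = isResetTimeExceeded(ct)(openState{resetAt: ct.Add(-time.Second)}) // Some: reset time passed, no canary active
// _ = isResetTimeExceeded(ct)(openState{resetAt: ct.Add(time.Minute)})  // None: circuit must stay fully open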
// handleSuccessOnClosed handles a successful request when the circuit breaker is in closed state.
// It updates the closed state by recording the success and returns an IO operation that
// modifies the breaker state.
//
// This function is part of the circuit breaker's state management for the closed state.
// When a request succeeds in closed state:
// 1. The current time is obtained
// 2. The addSuccess function is called with the current time to update the ClosedState
// 3. The updated ClosedState is wrapped in a Right (closed) BreakerState
// 4. The breaker state is modified with the new state
//
// Parameters:
// - currentTime: An IO operation that provides the current time
// - addSuccess: A Reader that takes a time and returns an endomorphism for ClosedState,
// typically resetting failure counters or history
//
// Returns:
// - An io.Kleisli that takes another io.Kleisli and chains them together.
// The outer Kleisli takes an Endomorphism[BreakerState] and returns BreakerState.
// This allows composing the success handling with other state modifications.
//
// Thread Safety: This function creates IO operations that will atomically modify the
// IORef[BreakerState] when executed. The state modifications are thread-safe.
//
// Type signature:
//
// io.Kleisli[io.Kleisli[Endomorphism[BreakerState], BreakerState], BreakerState]
//
// Usage Context:
// - Called when a request succeeds while the circuit is closed
// - Resets failure tracking (counter or history) in the ClosedState
// - Keeps the circuit in closed state
func handleSuccessOnClosed(
currentTime IO[time.Time],
addSuccess Reader[time.Time, Endomorphism[ClosedState]],
) io.Kleisli[io.Kleisli[Endomorphism[BreakerState], BreakerState], BreakerState] {
return F.Flow2(
io.Chain,
identity.Flap[IO[BreakerState]](F.Pipe1(
currentTime,
io.Map(F.Flow2(
addSuccess,
either.Map[openState],
)))),
)
}
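// Illustrative sketch for handleSuccessOnClosed (mirrors TestHandleSuccessOnClosed; the
// concrete clock and threshold below are assumptions):
//
// ref := io.Run(ioref.MakeIORef(createClosedCircuit(MakeClosedStateCounter(3))))
// handler := handleSuccessOnClosed(time.Now, reader.From1(ClosedState.AddSuccess))
// newState := io.Run(handler(modifyV(ref)))
// // IsClosed(newState) == true and the failure tracking has been reset.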
// handleFailureOnClosed handles a failed request when the circuit breaker is in closed state.
// It updates the closed state by recording the failure and checks if the circuit should open.
//
// This function is part of the circuit breaker's state management for the closed state.
// When a request fails in closed state:
// 1. The current time is obtained
// 2. The addError function is called to record the failure in the ClosedState
// 3. The checkClosedState function is called to determine if the failure threshold is exceeded
// 4. If the threshold is exceeded (Check returns None):
// - The circuit transitions to open state using openCircuit
// - A new openState is created with resetAt time calculated from the retry policy
// 5. If the threshold is not exceeded (Check returns Some):
// - The circuit remains closed with the updated failure tracking
//
// Parameters:
// - currentTime: An IO operation that provides the current time
// - addError: A Reader that takes a time and returns an endomorphism for ClosedState,
// recording a failure (incrementing counter or adding to history)
// - checkClosedState: A Reader that takes a time and returns an option.Kleisli that checks
// if the ClosedState should remain closed. Returns Some if circuit stays closed, None if it should open.
// - openCircuit: A Reader that takes a time and returns an openState with calculated resetAt time
//
// Returns:
// - An io.Kleisli that takes another io.Kleisli and chains them together.
// The outer Kleisli takes an Endomorphism[BreakerState] and returns BreakerState.
// This allows composing the failure handling with other state modifications.
//
// Thread Safety: This function creates IO operations that will atomically modify the
// IORef[BreakerState] when executed. The state modifications are thread-safe.
//
// Type signature:
//
// io.Kleisli[io.Kleisli[Endomorphism[BreakerState], BreakerState], BreakerState]
//
// State Transitions:
// - Closed -> Closed: When failure threshold is not exceeded (Some from checkClosedState)
// - Closed -> Open: When failure threshold is exceeded (None from checkClosedState)
//
// Usage Context:
// - Called when a request fails while the circuit is closed
// - Records the failure in the ClosedState (counter or history)
// - May trigger transition to open state if threshold is exceeded
func handleFailureOnClosed(
currentTime IO[time.Time],
addError Reader[time.Time, Endomorphism[ClosedState]],
checkClosedState Reader[time.Time, option.Kleisli[ClosedState, ClosedState]],
openCircuit Reader[time.Time, openState],
) io.Kleisli[io.Kleisli[Endomorphism[BreakerState], BreakerState], BreakerState] {
return F.Flow2(
io.Chain,
identity.Flap[IO[BreakerState]](F.Pipe1(
currentTime,
io.Map(func(ct time.Time) either.Operator[openState, ClosedState, ClosedState] {
return either.Chain(F.Flow3(
addError(ct),
checkClosedState(ct),
option.Fold(
F.Pipe2(
ct,
lazy.Of,
lazy.Map(F.Flow2(
openCircuit,
createOpenCircuit,
)),
),
createClosedCircuit,
),
))
}))),
)
}
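// Illustrative sketch for handleFailureOnClosed (mirrors TestHandleFailureOnClosed; the
// clock, threshold and openCircuit constructor below are assumptions):
//
// openCircuit := func(ct time.Time) openState {
// 	return openState{openedAt: ct, resetAt: ct.Add(time.Minute), retryStatus: retry.DefaultRetryStatus}
// }
// ref := io.Run(ioref.MakeIORef(createClosedCircuit(MakeClosedStateCounter(2))))
// handler := handleFailureOnClosed(time.Now, reader.From1(ClosedState.AddError), reader.From1(ClosedState.Check), openCircuit)
// first := io.Run(handler(modifyV(ref)))  // 1st failure of 2: IsClosed(first) == true
// second := io.Run(handler(modifyV(ref))) // 2nd failure reaches the threshold: IsOpen(second) == true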
// MakeCircuitBreaker creates a circuit breaker implementation for a higher-kinded type.
//
// This is a generic circuit breaker factory that works with any monad-like type (HKTT).
// It implements the circuit breaker pattern by wrapping operations and managing state transitions
// between closed, open, and half-open states based on failure rates and retry policies.
//
// Type Parameters:
// - E: The error type
// - T: The success value type
// - HKTT: The higher-kinded type representing the computation (e.g., IO[T], ReaderIO[R, T])
// - HKTOP: The higher-kinded type for operators (e.g., IO[func(HKTT) HKTT])
// - HKTHKTT: The nested higher-kinded type (e.g., IO[IO[T]])
//
// Parameters:
// - left: Constructs an error result in HKTT from an error value
// - chainFirstIOK: Chains an IO operation that runs after success, preserving the original value
// - chainFirstLeftIOK: Chains an IO operation that runs after error, preserving the original error
// - fromIO: Lifts an IO operation into HKTOP
// - flap: Applies a value to a function wrapped in a higher-kinded type
// - flatten: Flattens nested higher-kinded types (join operation)
// - currentTime: IO operation that provides the current time
// - closedState: The initial closed state configuration
// - makeError: Creates an error from a reset time when the circuit is open
// - checkError: Predicate to determine if an error should trigger circuit breaker logic
// - policy: Retry policy for determining reset times when circuit opens
// - metrics: Metrics implementation used to record circuit breaker events (e.g., when the circuit opens)
//
// Thread Safety: The returned State monad creates operations that are thread-safe when
// executed. The IORef[BreakerState] uses atomic operations for all state modifications.
// Multiple concurrent requests will be properly serialized at the IORef level.
//
// Returns:
// - A State monad that transforms a pair of (IORef[BreakerState], HKTT) into HKTT,
// applying circuit breaker logic to the computation
func MakeCircuitBreaker[E, T, HKTT, HKTOP, HKTHKTT any](
left func(E) HKTT,
chainFirstIOK func(io.Kleisli[T, BreakerState]) func(HKTT) HKTT,
chainFirstLeftIOK func(io.Kleisli[E, BreakerState]) func(HKTT) HKTT,
fromIO func(IO[func(HKTT) HKTT]) HKTOP,
flap func(HKTT) func(HKTOP) HKTHKTT,
flatten func(HKTHKTT) HKTT,
currentTime IO[time.Time],
closedState ClosedState,
makeError Reader[time.Time, E],
checkError option.Kleisli[E, E],
policy retry.RetryPolicy,
metrics Metrics,
) State[Pair[IORef[BreakerState], HKTT], HKTT] {
type Operator = func(HKTT) HKTT
addSuccess := reader.From1(ClosedState.AddSuccess)
addError := reader.From1(ClosedState.AddError)
checkClosedState := reader.From1(ClosedState.Check)
closedCircuit := createClosedCircuit(closedState.Empty())
makeOpenCircuit := makeOpenCircuitFromPolicy(policy)
openCircuit := F.Pipe1(
initialRetry,
makeOpenCircuit,
)
extendOpenCircuit := extendOpenCircuitFromMakeCircuit(makeOpenCircuit)
failWithError := F.Flow4(
resetAtLens.Get,
makeError,
left,
reader.Of[HKTT],
)
handleSuccess := handleSuccessOnClosed(currentTime, addSuccess)
handleFailure := handleFailureOnClosed(currentTime, addError, checkClosedState, openCircuit)
onClosed := func(modify io.Kleisli[Endomorphism[BreakerState], BreakerState]) Operator {
return F.Flow2(
// error case
chainFirstLeftIOK(F.Flow3(
checkError,
option.Fold(
// the error is not applicable, handle as success
F.Pipe2(
modify,
handleSuccess,
lazy.Of,
),
// the error is relevant, record it
F.Pipe2(
modify,
handleFailure,
reader.Of[E],
),
),
// metering
io.ChainFirst(either.Fold(
F.Flow2(
openedAtLens.Get,
metrics.Open,
),
func(c ClosedState) IO[Void] {
return io.Of(function.VOID)
},
)),
)),
// good case
chainFirstIOK(F.Pipe2(
modify,
handleSuccess,
reader.Of[T],
)),
)
}
onCanary := func(modify io.Kleisli[Endomorphism[BreakerState], BreakerState]) Operator {
handleSuccess := F.Pipe2(
closedCircuit,
reader.Of[BreakerState],
modify,
)
return F.Flow2(
// the canary request fails
chainFirstLeftIOK(F.Flow2(
checkError,
option.Fold(
// the canary request succeeds, we close the circuit
F.Pipe1(
handleSuccess,
lazy.Of,
),
// the canary request fails, we extend the circuit
F.Pipe1(
F.Pipe1(
currentTime,
io.Chain(func(ct time.Time) IO[BreakerState] {
return F.Pipe1(
F.Flow2(
either.Fold(
extendOpenCircuit(ct),
F.Pipe1(
openCircuit(ct),
reader.Of[ClosedState],
),
),
createOpenCircuit,
),
modify,
)
}),
),
reader.Of[E],
),
),
)),
// the canary request succeeds, we'll close the circuit
chainFirstIOK(F.Pipe1(
handleSuccess,
reader.Of[T],
)),
)
}
onOpen := func(ref IORef[BreakerState]) Operator {
modify := modifyV(ref)
return F.Pipe3(
currentTime,
io.Chain(func(ct time.Time) IO[Operator] {
return F.Pipe1(
ref,
ioref.ModifyWithResult(either.Fold(
func(open openState) Pair[BreakerState, Operator] {
return option.Fold(
func() Pair[BreakerState, Operator] {
return pair.MakePair(createOpenCircuit(open), failWithError(open))
},
func(open openState) Pair[BreakerState, Operator] {
return pair.MakePair(createOpenCircuit(testCircuit(open)), onCanary(modify))
},
)(isResetTimeExceeded(ct)(open))
},
func(closed ClosedState) Pair[BreakerState, Operator] {
return pair.MakePair(createClosedCircuit(closed), onClosed(modify))
},
)),
)
}),
fromIO,
func(src HKTOP) Operator {
return func(rdr HKTT) HKTT {
return F.Pipe2(
src,
flap(rdr),
flatten,
)
}
},
)
}
return func(e Pair[IORef[BreakerState], HKTT]) Pair[Pair[IORef[BreakerState], HKTT], HKTT] {
return pair.MakePair(e, onOpen(pair.Head(e))(pair.Tail(e)))
}
}
// MakeSingletonBreaker creates a singleton circuit breaker operator for a higher-kinded type.
//
// This function creates a circuit breaker that maintains its own internal state reference.
// It's called "singleton" because it creates a single, self-contained circuit breaker instance
// with its own IORef for state management. The returned function can be used to wrap
// computations with circuit breaker protection.
//
// Type Parameters:
// - HKTT: The higher-kinded type representing the computation (e.g., IO[T], ReaderIO[R, T])
//
// Parameters:
// - cb: The circuit breaker State monad created by MakeCircuitBreaker
// - closedState: The initial closed state configuration for the circuit breaker
//
// Returns:
// - A function that wraps a computation (HKTT) with circuit breaker logic.
// The circuit breaker state is managed internally and persists across invocations.
//
// Thread Safety: The returned function is thread-safe. The internal IORef[BreakerState]
// uses atomic operations to manage state. Multiple concurrent calls to the returned function
// will be properly serialized at the state modification level.
//
// Example Usage:
//
// // Create a circuit breaker for IO operations
// breaker := MakeSingletonBreaker(
// MakeCircuitBreaker(...),
// MakeClosedStateCounter(3),
// )
//
// // Use it to wrap operations
// protectedOp := breaker(myIOOperation)
func MakeSingletonBreaker[HKTT any](
cb State[Pair[IORef[BreakerState], HKTT], HKTT],
closedState ClosedState,
) func(HKTT) HKTT {
return F.Flow3(
F.Pipe3(
closedState,
MakeClosedIORef,
io.Run,
pair.FromHead[HKTT],
),
cb,
pair.Tail,
)
}


@@ -0,0 +1,579 @@
package circuitbreaker
import (
"sync"
"testing"
"time"
"github.com/IBM/fp-go/v2/function"
F "github.com/IBM/fp-go/v2/function"
"github.com/IBM/fp-go/v2/io"
"github.com/IBM/fp-go/v2/ioref"
"github.com/IBM/fp-go/v2/option"
"github.com/IBM/fp-go/v2/reader"
"github.com/IBM/fp-go/v2/retry"
"github.com/stretchr/testify/assert"
)
type testMetrics struct {
accepts int
rejects int
opens int
closes int
canary int
mu sync.Mutex
}
func (m *testMetrics) Accept(_ time.Time) IO[Void] {
return func() Void {
m.mu.Lock()
defer m.mu.Unlock()
m.accepts++
return function.VOID
}
}
func (m *testMetrics) Open(_ time.Time) IO[Void] {
return func() Void {
m.mu.Lock()
defer m.mu.Unlock()
m.opens++
return function.VOID
}
}
func (m *testMetrics) Close(_ time.Time) IO[Void] {
return func() Void {
m.mu.Lock()
defer m.mu.Unlock()
m.closes++
return function.VOID
}
}
func (m *testMetrics) Reject(_ time.Time) IO[Void] {
return func() Void {
m.mu.Lock()
defer m.mu.Unlock()
m.rejects++
return function.VOID
}
}
func (m *testMetrics) Canary(_ time.Time) IO[Void] {
return func() Void {
m.mu.Lock()
defer m.mu.Unlock()
m.canary++
return function.VOID
}
}
// VirtualTimer provides a controllable time source for testing
type VirtualTimer struct {
mu sync.Mutex
current time.Time
}
func NewMockMetrics() Metrics {
return &testMetrics{}
}
// NewVirtualTimer creates a new virtual timer starting at the given time
func NewVirtualTimer(start time.Time) *VirtualTimer {
return &VirtualTimer{current: start}
}
// Now returns the current virtual time
func (vt *VirtualTimer) Now() time.Time {
vt.mu.Lock()
defer vt.mu.Unlock()
return vt.current
}
// Advance moves the virtual time forward by the given duration
func (vt *VirtualTimer) Advance(d time.Duration) {
vt.mu.Lock()
defer vt.mu.Unlock()
vt.current = vt.current.Add(d)
}
// Set sets the virtual time to a specific value
func (vt *VirtualTimer) Set(t time.Time) {
vt.mu.Lock()
defer vt.mu.Unlock()
vt.current = t
}
// TestModifyV tests the modifyV variable
func TestModifyV(t *testing.T) {
t.Run("modifyV creates a Reader that modifies IORef", func(t *testing.T) {
vt := NewVirtualTimer(time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC))
// Create initial state
initialState := createClosedCircuit(MakeClosedStateCounter(3))
ref := io.Run(ioref.MakeIORef(initialState))
// Create an endomorphism that opens the circuit
now := vt.Now()
openState := openState{
openedAt: now,
resetAt: now.Add(1 * time.Minute),
retryStatus: retry.DefaultRetryStatus,
canaryRequest: false,
}
endomorphism := func(bs BreakerState) BreakerState {
return createOpenCircuit(openState)
}
// Apply modifyV
modifyOp := modifyV(ref)
result := io.Run(modifyOp(endomorphism))
// Verify the state was modified
assert.True(t, IsOpen(result), "state should be open after modification")
})
t.Run("modifyV returns the new state", func(t *testing.T) {
initialState := createClosedCircuit(MakeClosedStateCounter(3))
ref := io.Run(ioref.MakeIORef(initialState))
// Create a simple endomorphism
endomorphism := F.Identity[BreakerState]
modifyOp := modifyV(ref)
result := io.Run(modifyOp(endomorphism))
assert.True(t, IsClosed(result), "state should remain closed")
})
}
// TestTestCircuit tests the testCircuit variable
func TestTestCircuit(t *testing.T) {
t.Run("testCircuit sets canaryRequest to true", func(t *testing.T) {
vt := NewVirtualTimer(time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC))
now := vt.Now()
openState := openState{
openedAt: now,
resetAt: now.Add(1 * time.Minute),
retryStatus: retry.DefaultRetryStatus,
canaryRequest: false,
}
result := testCircuit(openState)
assert.True(t, result.canaryRequest, "canaryRequest should be set to true")
assert.Equal(t, openState.openedAt, result.openedAt, "openedAt should be unchanged")
assert.Equal(t, openState.resetAt, result.resetAt, "resetAt should be unchanged")
})
t.Run("testCircuit is idempotent", func(t *testing.T) {
vt := NewVirtualTimer(time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC))
now := vt.Now()
openState := openState{
openedAt: now,
resetAt: now.Add(1 * time.Minute),
retryStatus: retry.DefaultRetryStatus,
canaryRequest: true, // already true
}
result := testCircuit(openState)
assert.True(t, result.canaryRequest, "canaryRequest should remain true")
})
t.Run("testCircuit preserves other fields", func(t *testing.T) {
vt := NewVirtualTimer(time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC))
now := vt.Now()
resetTime := now.Add(2 * time.Minute)
retryStatus := retry.RetryStatus{
IterNumber: 5,
PreviousDelay: option.Some(30 * time.Second),
}
openState := openState{
openedAt: now,
resetAt: resetTime,
retryStatus: retryStatus,
canaryRequest: false,
}
result := testCircuit(openState)
assert.Equal(t, now, result.openedAt, "openedAt should be preserved")
assert.Equal(t, resetTime, result.resetAt, "resetAt should be preserved")
assert.Equal(t, retryStatus.IterNumber, result.retryStatus.IterNumber, "retryStatus should be preserved")
assert.True(t, result.canaryRequest, "canaryRequest should be set to true")
})
}
// TestMakeOpenCircuitFromPolicy tests the makeOpenCircuitFromPolicy function
func TestMakeOpenCircuitFromPolicy(t *testing.T) {
t.Run("creates openState with calculated reset time", func(t *testing.T) {
policy := retry.LimitRetries(5)
makeOpen := makeOpenCircuitFromPolicy(policy)
currentTime := time.Date(2026, 1, 9, 12, 0, 0, 0, time.UTC)
result := makeOpen(retry.DefaultRetryStatus)(currentTime)
assert.Equal(t, currentTime, result.openedAt, "openedAt should be current time")
assert.False(t, result.canaryRequest, "canaryRequest should be false initially")
assert.NotNil(t, result.retryStatus, "retryStatus should be set")
})
t.Run("applies retry policy to calculate delay", func(t *testing.T) {
// Use exponential backoff policy with limit and cap
policy := retry.Monoid.Concat(
retry.LimitRetries(10),
retry.CapDelay(10*time.Second, retry.ExponentialBackoff(1*time.Second)),
)
makeOpen := makeOpenCircuitFromPolicy(policy)
currentTime := time.Date(2026, 1, 9, 12, 0, 0, 0, time.UTC)
// First retry (iter 0)
result1 := makeOpen(retry.DefaultRetryStatus)(currentTime)
// The first delay should be approximately 1 second
expectedResetTime1 := currentTime.Add(1 * time.Second)
assert.WithinDuration(t, expectedResetTime1, result1.resetAt, 100*time.Millisecond,
"first reset time should be ~1 second from now")
// Second retry (iter 1) - should double
result2 := makeOpen(result1.retryStatus)(currentTime)
expectedResetTime2 := currentTime.Add(2 * time.Second)
assert.WithinDuration(t, expectedResetTime2, result2.resetAt, 100*time.Millisecond,
"second reset time should be ~2 seconds from now")
})
t.Run("handles first failure with no previous delay", func(t *testing.T) {
policy := retry.LimitRetries(3)
makeOpen := makeOpenCircuitFromPolicy(policy)
currentTime := time.Date(2026, 1, 9, 12, 0, 0, 0, time.UTC)
result := makeOpen(retry.DefaultRetryStatus)(currentTime)
// With no previous delay, resetAt should be current time
assert.Equal(t, currentTime, result.resetAt, "resetAt should be current time when no previous delay")
})
t.Run("increments retry iteration number", func(t *testing.T) {
vt := NewVirtualTimer(time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC))
policy := retry.LimitRetries(10)
makeOpen := makeOpenCircuitFromPolicy(policy)
currentTime := vt.Now()
initialStatus := retry.DefaultRetryStatus
result := makeOpen(initialStatus)(currentTime)
assert.Greater(t, result.retryStatus.IterNumber, initialStatus.IterNumber,
"retry iteration should be incremented")
})
t.Run("curried function can be partially applied", func(t *testing.T) {
vt := NewVirtualTimer(time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC))
policy := retry.LimitRetries(5)
makeOpen := makeOpenCircuitFromPolicy(policy)
// Partially apply with retry status
makeOpenWithStatus := makeOpen(retry.DefaultRetryStatus)
currentTime := vt.Now()
result := makeOpenWithStatus(currentTime)
assert.NotNil(t, result, "partially applied function should work")
assert.Equal(t, currentTime, result.openedAt)
})
}
// TestExtendOpenCircuitFromMakeCircuit tests the extendOpenCircuitFromMakeCircuit function
func TestExtendOpenCircuitFromMakeCircuit(t *testing.T) {
t.Run("extends open circuit with new retry status", func(t *testing.T) {
policy := retry.Monoid.Concat(
retry.LimitRetries(10),
retry.ExponentialBackoff(1*time.Second),
)
makeCircuit := makeOpenCircuitFromPolicy(policy)
extendCircuit := extendOpenCircuitFromMakeCircuit(makeCircuit)
currentTime := time.Date(2026, 1, 9, 12, 0, 0, 0, time.UTC)
// Create initial open state
initialOpen := openState{
openedAt: currentTime.Add(-1 * time.Minute),
resetAt: currentTime,
retryStatus: retry.DefaultRetryStatus,
canaryRequest: false,
}
// Extend the circuit
extendOp := extendCircuit(currentTime)
result := extendOp(initialOpen)
assert.True(t, result.canaryRequest, "canaryRequest should be set to true")
assert.Greater(t, result.retryStatus.IterNumber, initialOpen.retryStatus.IterNumber,
"retry iteration should be incremented")
assert.True(t, result.resetAt.After(currentTime), "resetAt should be in the future")
})
t.Run("sets canaryRequest to true for next test", func(t *testing.T) {
vt := NewVirtualTimer(time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC))
policy := retry.LimitRetries(5)
makeCircuit := makeOpenCircuitFromPolicy(policy)
extendCircuit := extendOpenCircuitFromMakeCircuit(makeCircuit)
currentTime := vt.Now()
initialOpen := openState{
openedAt: currentTime.Add(-30 * time.Second),
resetAt: currentTime,
retryStatus: retry.DefaultRetryStatus,
canaryRequest: false,
}
result := extendCircuit(currentTime)(initialOpen)
assert.True(t, result.canaryRequest, "canaryRequest must be true after extension")
})
t.Run("applies exponential backoff on successive extensions", func(t *testing.T) {
policy := retry.Monoid.Concat(
retry.LimitRetries(10),
retry.ExponentialBackoff(1*time.Second),
)
makeCircuit := makeOpenCircuitFromPolicy(policy)
extendCircuit := extendOpenCircuitFromMakeCircuit(makeCircuit)
currentTime := time.Date(2026, 1, 9, 12, 0, 0, 0, time.UTC)
// First extension
state1 := openState{
openedAt: currentTime,
resetAt: currentTime,
retryStatus: retry.DefaultRetryStatus,
canaryRequest: false,
}
result1 := extendCircuit(currentTime)(state1)
delay1 := result1.resetAt.Sub(currentTime)
// Second extension (should have longer delay)
result2 := extendCircuit(currentTime)(result1)
delay2 := result2.resetAt.Sub(currentTime)
assert.Greater(t, delay2, delay1, "second extension should have longer delay due to exponential backoff")
})
}
// TestIsResetTimeExceeded tests the isResetTimeExceeded function
func TestIsResetTimeExceeded(t *testing.T) {
t.Run("returns Some when reset time is exceeded and no canary active", func(t *testing.T) {
currentTime := time.Date(2026, 1, 9, 12, 0, 0, 0, time.UTC)
resetTime := currentTime.Add(-1 * time.Second) // in the past
openState := openState{
openedAt: currentTime.Add(-1 * time.Minute),
resetAt: resetTime,
retryStatus: retry.DefaultRetryStatus,
canaryRequest: false,
}
result := isResetTimeExceeded(currentTime)(openState)
assert.True(t, option.IsSome(result), "should return Some when reset time exceeded")
})
t.Run("returns None when reset time not yet exceeded", func(t *testing.T) {
currentTime := time.Date(2026, 1, 9, 12, 0, 0, 0, time.UTC)
resetTime := currentTime.Add(1 * time.Minute) // in the future
openState := openState{
openedAt: currentTime.Add(-30 * time.Second),
resetAt: resetTime,
retryStatus: retry.DefaultRetryStatus,
canaryRequest: false,
}
result := isResetTimeExceeded(currentTime)(openState)
assert.True(t, option.IsNone(result), "should return None when reset time not exceeded")
})
t.Run("returns None when canary request is already active", func(t *testing.T) {
currentTime := time.Date(2026, 1, 9, 12, 0, 0, 0, time.UTC)
resetTime := currentTime.Add(-1 * time.Second) // in the past
openState := openState{
openedAt: currentTime.Add(-1 * time.Minute),
resetAt: resetTime,
retryStatus: retry.DefaultRetryStatus,
canaryRequest: true, // canary already active
}
result := isResetTimeExceeded(currentTime)(openState)
assert.True(t, option.IsNone(result), "should return None when canary is already active")
})
t.Run("returns Some at exact reset time boundary", func(t *testing.T) {
currentTime := time.Date(2026, 1, 9, 12, 0, 0, 0, time.UTC)
resetTime := currentTime.Add(-1 * time.Nanosecond) // just passed
openState := openState{
openedAt: currentTime.Add(-1 * time.Minute),
resetAt: resetTime,
retryStatus: retry.DefaultRetryStatus,
canaryRequest: false,
}
result := isResetTimeExceeded(currentTime)(openState)
assert.True(t, option.IsSome(result), "should return Some when current time is after reset time")
})
t.Run("returns None when current time equals reset time", func(t *testing.T) {
currentTime := time.Date(2026, 1, 9, 12, 0, 0, 0, time.UTC)
resetTime := currentTime // exactly equal
openState := openState{
openedAt: currentTime.Add(-1 * time.Minute),
resetAt: resetTime,
retryStatus: retry.DefaultRetryStatus,
canaryRequest: false,
}
result := isResetTimeExceeded(currentTime)(openState)
assert.True(t, option.IsNone(result), "should return None when times are equal (not After)")
})
}
// TestHandleSuccessOnClosed tests the handleSuccessOnClosed function
func TestHandleSuccessOnClosed(t *testing.T) {
t.Run("resets failure count on success", func(t *testing.T) {
vt := NewVirtualTimer(time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC))
currentTime := vt.Now
addSuccess := reader.From1(ClosedState.AddSuccess)
// Create initial state with some failures
now := vt.Now()
initialClosed := MakeClosedStateCounter(3)
initialClosed = initialClosed.AddError(now)
initialClosed = initialClosed.AddError(now)
initialState := createClosedCircuit(initialClosed)
ref := io.Run(ioref.MakeIORef(initialState))
modify := modifyV(ref)
handler := handleSuccessOnClosed(currentTime, addSuccess)
// Apply the handler
result := io.Run(handler(modify))
// Verify state is still closed and failures are reset
assert.True(t, IsClosed(result), "circuit should remain closed after success")
})
t.Run("keeps circuit closed", func(t *testing.T) {
vt := NewVirtualTimer(time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC))
currentTime := vt.Now
addSuccess := reader.From1(ClosedState.AddSuccess)
initialState := createClosedCircuit(MakeClosedStateCounter(3))
ref := io.Run(ioref.MakeIORef(initialState))
modify := modifyV(ref)
handler := handleSuccessOnClosed(currentTime, addSuccess)
result := io.Run(handler(modify))
assert.True(t, IsClosed(result), "circuit should remain closed")
})
}
// TestHandleFailureOnClosed tests the handleFailureOnClosed function
func TestHandleFailureOnClosed(t *testing.T) {
t.Run("keeps circuit closed when threshold not exceeded", func(t *testing.T) {
vt := NewVirtualTimer(time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC))
currentTime := vt.Now
addError := reader.From1(ClosedState.AddError)
checkClosedState := reader.From1(ClosedState.Check)
openCircuit := func(ct time.Time) openState {
return openState{
openedAt: ct,
resetAt: ct.Add(1 * time.Minute),
retryStatus: retry.DefaultRetryStatus,
canaryRequest: false,
}
}
// Create initial state with room for more failures
now := vt.Now()
initialClosed := MakeClosedStateCounter(5) // threshold is 5
initialClosed = initialClosed.AddError(now)
initialState := createClosedCircuit(initialClosed)
ref := io.Run(ioref.MakeIORef(initialState))
modify := modifyV(ref)
handler := handleFailureOnClosed(currentTime, addError, checkClosedState, openCircuit)
result := io.Run(handler(modify))
assert.True(t, IsClosed(result), "circuit should remain closed when threshold not exceeded")
})
t.Run("opens circuit when threshold exceeded", func(t *testing.T) {
vt := NewVirtualTimer(time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC))
currentTime := vt.Now
addError := reader.From1(ClosedState.AddError)
checkClosedState := reader.From1(ClosedState.Check)
openCircuit := func(ct time.Time) openState {
return openState{
openedAt: ct,
resetAt: ct.Add(1 * time.Minute),
retryStatus: retry.DefaultRetryStatus,
canaryRequest: false,
}
}
// Create initial state at threshold
now := vt.Now()
initialClosed := MakeClosedStateCounter(2) // threshold is 2
initialClosed = initialClosed.AddError(now)
initialState := createClosedCircuit(initialClosed)
ref := io.Run(ioref.MakeIORef(initialState))
modify := modifyV(ref)
handler := handleFailureOnClosed(currentTime, addError, checkClosedState, openCircuit)
result := io.Run(handler(modify))
assert.True(t, IsOpen(result), "circuit should open when threshold exceeded")
})
t.Run("records failure in closed state", func(t *testing.T) {
vt := NewVirtualTimer(time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC))
currentTime := vt.Now
addError := reader.From1(ClosedState.AddError)
checkClosedState := reader.From1(ClosedState.Check)
openCircuit := func(ct time.Time) openState {
return openState{
openedAt: ct,
resetAt: ct.Add(1 * time.Minute),
retryStatus: retry.DefaultRetryStatus,
canaryRequest: false,
}
}
initialState := createClosedCircuit(MakeClosedStateCounter(10))
ref := io.Run(ioref.MakeIORef(initialState))
modify := modifyV(ref)
handler := handleFailureOnClosed(currentTime, addError, checkClosedState, openCircuit)
result := io.Run(handler(modify))
// Should still be closed but with failure recorded
assert.True(t, IsClosed(result), "circuit should remain closed")
})
}

v2/circuitbreaker/closed.go

@@ -0,0 +1,329 @@
package circuitbreaker
import (
"slices"
"time"
A "github.com/IBM/fp-go/v2/array"
F "github.com/IBM/fp-go/v2/function"
N "github.com/IBM/fp-go/v2/number"
"github.com/IBM/fp-go/v2/optics/lens"
"github.com/IBM/fp-go/v2/option"
"github.com/IBM/fp-go/v2/ord"
)
type (
// ClosedState represents the closed state of a circuit breaker.
// In the closed state, requests are allowed to pass through, but failures are tracked.
// If a failure condition is met, the circuit breaker transitions to an open state.
//
// # Thread Safety
//
// All ClosedState implementations MUST be thread-safe. The recommended approach is to
// make all methods return new copies rather than modifying the receiver, which provides
// automatic thread safety through immutability.
//
// Implementations should ensure that:
// - Empty() returns a new instance with cleared state
// - AddError() returns a new instance with the error recorded
// - AddSuccess() returns a new instance with success recorded
// - Check() does not modify the receiver
//
// Both provided implementations (closedStateWithErrorCount and closedStateWithHistory)
// follow this pattern and are safe for concurrent use.
ClosedState interface {
// Empty returns a new ClosedState with all tracked failures cleared.
// This is used when transitioning back to a closed state from an open state.
//
// Thread Safety: Returns a new instance; safe for concurrent use.
Empty() ClosedState
// AddError records a failure at the given time.
// Returns an updated ClosedState reflecting the recorded failure.
//
// Thread Safety: Returns a new instance; safe for concurrent use.
// The original ClosedState is not modified.
AddError(time.Time) ClosedState
// AddSuccess records a successful request at the given time.
// Returns an updated ClosedState reflecting the successful request.
//
// Thread Safety: Returns a new instance; safe for concurrent use.
// The original ClosedState is not modified.
AddSuccess(time.Time) ClosedState
// Check verifies if the circuit breaker should remain closed at the given time.
// Returns Some(ClosedState) if the circuit should stay closed,
// or None if the circuit should open due to exceeding the failure threshold.
//
// Thread Safety: Does not modify the receiver; safe for concurrent use.
Check(time.Time) Option[ClosedState]
}
// closedStateWithErrorCount is a counter-based implementation of ClosedState.
// It tracks the number of consecutive failures and opens the circuit when
// the failure count exceeds a configured threshold.
//
// Thread Safety: This implementation is immutable. All methods return new instances
// rather than modifying the receiver, making it safe for concurrent use without locks.
closedStateWithErrorCount struct {
// checkFailures is a Kleisli arrow that checks whether the failure count is still below the threshold.
// Returns Some(count) if the count is below the threshold (circuit stays closed), None otherwise.
checkFailures option.Kleisli[uint, uint]
// failureCount tracks the current number of consecutive failures.
failureCount uint
}
// closedStateWithHistory is a time-window-based implementation of ClosedState.
// It tracks failures within a sliding time window and opens the circuit when
// the failure count within the window exceeds a configured threshold.
//
// Thread Safety: This implementation is immutable. All methods return new instances
// with new slices rather than modifying the receiver, making it safe for concurrent
// use without locks. The history slice is never modified in place; addToSlice always
// creates a new slice.
closedStateWithHistory struct {
ordTime Ord[time.Time]
// checkFailures is a Kleisli arrow that checks whether the number of failures within the
// time window is still below the configured threshold.
// Returns Some(count) if the count is below the threshold (circuit stays closed), None otherwise.
checkFailures option.Kleisli[int, int]
timeWindow time.Duration
history []time.Time
}
)
var (
failureCountLens = lens.MakeLensStrictWithName(
func(s *closedStateWithErrorCount) uint { return s.failureCount },
func(s *closedStateWithErrorCount, c uint) *closedStateWithErrorCount {
s.failureCount = c
return s
},
"closeStateWithErrorCount.failureCount",
)
historyLens = lens.MakeLensRefWithName(
func(s *closedStateWithHistory) []time.Time { return s.history },
func(s *closedStateWithHistory, c []time.Time) *closedStateWithHistory {
s.history = c
return s
},
"closedStateWithHistory.history",
)
resetHistory = historyLens.Set(A.Empty[time.Time]())
resetFailureCount = failureCountLens.Set(0)
incFailureCount = lens.Modify[*closedStateWithErrorCount](N.Add(uint(1)))(failureCountLens)
)
// Empty returns a new closedStateWithErrorCount with the failure count reset to zero.
//
// Thread Safety: Returns a new instance; the original is not modified.
// Safe for concurrent use.
func (s *closedStateWithErrorCount) Empty() ClosedState {
return resetFailureCount(s)
}
// AddError increments the failure count and returns a new closedStateWithErrorCount.
// The time parameter is ignored in this counter-based implementation.
//
// Thread Safety: Returns a new instance; the original is not modified.
// Safe for concurrent use.
func (s *closedStateWithErrorCount) AddError(_ time.Time) ClosedState {
return incFailureCount(s)
}
// AddSuccess resets the failure count to zero and returns a new closedStateWithErrorCount.
// The time parameter is ignored in this counter-based implementation.
//
// Thread Safety: Returns a new instance; the original is not modified.
// Safe for concurrent use.
func (s *closedStateWithErrorCount) AddSuccess(_ time.Time) ClosedState {
return resetFailureCount(s)
}
// Check verifies if the failure count is below the threshold.
// Returns Some(ClosedState) if below threshold, None if at or above threshold.
// The time parameter is ignored in this counter-based implementation.
//
// Thread Safety: Does not modify the receiver; safe for concurrent use.
func (s *closedStateWithErrorCount) Check(_ time.Time) Option[ClosedState] {
return F.Pipe3(
s,
failureCountLens.Get,
s.checkFailures,
option.MapTo[uint](ClosedState(s)),
)
}
// MakeClosedStateCounter creates a counter-based ClosedState implementation.
// The circuit breaker will open when the number of consecutive failures reaches maxFailures.
//
// Parameters:
// - maxFailures: The threshold for consecutive failures. The circuit opens when
// failureCount >= maxFailures (greater than or equal to).
//
// Returns:
// - A ClosedState that tracks failures using a simple counter.
//
// Example:
// - If maxFailures is 3, the circuit will open on the 3rd consecutive failure.
// - Each AddError call increments the counter.
// - Each AddSuccess call resets the counter to 0 (only consecutive failures count).
// - Empty resets the counter to 0.
//
// Behavior:
// - Check returns Some(ClosedState) when failureCount < maxFailures (circuit stays closed)
// - Check returns None when failureCount >= maxFailures (circuit should open)
// - AddSuccess resets the failure count, so only consecutive failures trigger circuit opening
//
// Thread Safety: The returned ClosedState is safe for concurrent use. All methods
// return new instances rather than modifying the receiver.
func MakeClosedStateCounter(maxFailures uint) ClosedState {
return &closedStateWithErrorCount{
checkFailures: option.FromPredicate(N.LessThan(maxFailures)),
}
}
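// Illustrative sketch of the counter behavior (mirrors the tests in this package; the
// threshold of 2 is an assumption):
//
// state := MakeClosedStateCounter(2)
// now := time.Now()
// state = state.AddError(now)          // 1st consecutive failure
// _ = state.Check(now)                 // Some: below the threshold, circuit stays closed
// state = state.AddError(now)          // 2nd consecutive failure reaches the threshold
// _ = state.Check(now)                 // None: circuit should open
// _ = state.AddSuccess(now).Check(now) // Some again: a success resets the counter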
// Empty returns a new closedStateWithHistory with an empty failure history.
//
// Thread Safety: Returns a new instance with a new empty slice; the original is not modified.
// Safe for concurrent use.
func (s *closedStateWithHistory) Empty() ClosedState {
return resetHistory(s)
}
// addToSlice creates a new sorted slice by adding an item to an existing slice.
// This function does not modify the input slice; it creates a new slice with the item added
// and returns it in sorted order.
//
// Parameters:
// - o: An Ord instance for comparing time.Time values to determine sort order
// - ar: The existing slice of time.Time values (assumed to be sorted)
// - item: The new time.Time value to add to the slice
//
// Returns:
// - A new slice containing all elements from ar plus the new item, sorted in ascending order
//
// Implementation Details:
// - Creates a new slice with capacity len(ar)+1
// - Copies all elements from ar to the new slice
// - Appends the new item
// - Sorts the entire slice using the provided Ord comparator
//
// Thread Safety: This function is pure and does not modify its inputs. It always returns
// a new slice, making it safe for concurrent use. This is a key component of the immutable
// design of closedStateWithHistory.
//
// Note: This function is used internally by closedStateWithHistory.AddError to maintain
// a sorted history of failure timestamps for efficient binary search operations.
func addToSlice(o ord.Ord[time.Time], ar []time.Time, item time.Time) []time.Time {
cpy := make([]time.Time, len(ar)+1)
cpy[copy(cpy, ar)] = item
slices.SortFunc(cpy, o.Compare)
return cpy
}
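// Illustrative sketch for addToSlice (t1 < t2 < t3 are assumed, pre-existing timestamps):
//
// history := addToSlice(ord.OrdTime(), []time.Time{t1, t3}, t2)
// // history: [t1, t2, t3]; the input slice is left untouched.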
// AddError records a failure at the given time and returns a new closedStateWithHistory.
// The new instance contains the failure in its history, with old failures outside the
// time window automatically pruned.
//
// Thread Safety: Returns a new instance with a new history slice; the original is not modified.
// Safe for concurrent use. The addToSlice function creates a new slice, ensuring immutability.
func (s *closedStateWithHistory) AddError(currentTime time.Time) ClosedState {
addFailureToHistory := F.Pipe1(
historyLens,
lens.Modify[*closedStateWithHistory](func(old []time.Time) []time.Time {
// oldest valid entry
idx, _ := slices.BinarySearchFunc(old, currentTime.Add(-s.timeWindow), s.ordTime.Compare)
return addToSlice(s.ordTime, old[idx:], currentTime)
}),
)
return addFailureToHistory(s)
}
// AddSuccess purges the entire failure history and returns a new closedStateWithHistory.
// The time parameter is ignored; any success clears all tracked failures.
//
// Thread Safety: Returns a new instance with a new empty slice; the original is not modified.
// Safe for concurrent use.
func (s *closedStateWithHistory) AddSuccess(_ time.Time) ClosedState {
return resetHistory(s)
}
// Check verifies if the number of failures in the history is below the threshold.
// Returns Some(ClosedState) if below threshold, None if at or above threshold.
// The time parameter is ignored; the check is based on the current history size.
//
// Thread Safety: Does not modify the receiver; safe for concurrent use.
func (s *closedStateWithHistory) Check(_ time.Time) Option[ClosedState] {
return F.Pipe4(
s,
historyLens.Get,
A.Size,
s.checkFailures,
option.MapTo[int](ClosedState(s)),
)
}
// MakeClosedStateHistory creates a time-window-based ClosedState implementation.
// The circuit breaker will open when the number of failures within a sliding time window reaches maxFailures.
//
// Unlike MakeClosedStateCounter which tracks consecutive failures, this implementation tracks
// all failures within a time window. However, any successful request will purge the entire history,
// effectively resetting the failure tracking.
//
// Parameters:
// - timeWindow: The duration of the sliding time window. Failures older than this are automatically
// discarded from the history when new failures are added.
// - maxFailures: The threshold for failures within the time window. The circuit opens when
// the number of failures in the window reaches this value (failureCount >= maxFailures).
//
// Returns:
// - A ClosedState that tracks failures using a time-based sliding window.
//
// Example:
// - If timeWindow is 1 minute and maxFailures is 5, the circuit will open when 5 failures
// occur within any 1-minute period.
// - Failures older than 1 minute are automatically removed from the history when AddError is called.
// - Any successful request immediately purges all tracked failures from the history.
//
// Behavior:
// - AddError records the failure timestamp and removes failures outside the time window
// (older than currentTime - timeWindow).
// - AddSuccess purges the entire failure history (all tracked failures are removed).
// - Check returns Some(ClosedState) when failureCount < maxFailures (circuit stays closed).
// - Check returns None when failureCount >= maxFailures (circuit should open).
// - Empty purges the entire failure history.
//
// Time Window Management:
// - The history is automatically pruned on each AddError call to remove failures older than
// currentTime - timeWindow.
// - The history is kept sorted by time for efficient binary search and pruning.
//
// Important Note:
// - A successful request resets everything by purging the entire history. This means that
// unlike a pure sliding window, a single success will clear all tracked failures, even
// those within the time window. This behavior is similar to MakeClosedStateCounter but
// with time-based tracking for failures.
//
// Thread Safety: The returned ClosedState is safe for concurrent use. All methods return
// new instances with new slices rather than modifying the receiver. The history slice is
// never modified in place.
//
// Use Cases:
// - Systems where a successful request indicates recovery and past failures should be forgotten.
// - Rate limiting with success-based reset: Allow bursts of failures but reset on success.
// - Hybrid approach: Time-based failure tracking with success-based recovery.
func MakeClosedStateHistory(
timeWindow time.Duration,
maxFailures uint) ClosedState {
return &closedStateWithHistory{
checkFailures: option.FromPredicate(N.LessThan(int(maxFailures))),
ordTime: ord.OrdTime(),
history: A.Empty[time.Time](),
timeWindow: timeWindow,
}
}
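// Illustrative sketch of the sliding-window behavior (mirrors the documentation above; the
// window of one minute and threshold of 2 are assumptions):
//
// state := MakeClosedStateHistory(time.Minute, 2)
// t0 := time.Now()
// state = state.AddError(t0)                       // 1st failure in the window
// _ = state.Check(t0)                              // Some: below the threshold
// state = state.AddError(t0.Add(10 * time.Second)) // 2nd failure within the window
// _ = state.Check(t0.Add(10 * time.Second))        // None: threshold reached, circuit should open
// state = state.AddSuccess(t0.Add(20 * time.Second))
// _ = state.Check(t0.Add(20 * time.Second))        // Some: the success purged the whole history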


@@ -0,0 +1,934 @@
package circuitbreaker
import (
"testing"
"time"
"github.com/IBM/fp-go/v2/option"
"github.com/IBM/fp-go/v2/ord"
"github.com/stretchr/testify/assert"
)
func TestMakeClosedStateCounter(t *testing.T) {
t.Run("creates a valid ClosedState", func(t *testing.T) {
maxFailures := uint(3)
state := MakeClosedStateCounter(maxFailures)
assert.NotNil(t, state, "MakeClosedStateCounter should return a non-nil ClosedState")
})
t.Run("initial state passes Check", func(t *testing.T) {
vt := NewVirtualTimer(time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC))
maxFailures := uint(3)
state := MakeClosedStateCounter(maxFailures)
now := vt.Now()
result := state.Check(now)
assert.True(t, option.IsSome(result), "initial state should pass Check (return Some, circuit stays closed)")
})
t.Run("Empty resets failure count", func(t *testing.T) {
vt := NewVirtualTimer(time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC))
maxFailures := uint(2)
state := MakeClosedStateCounter(maxFailures)
now := vt.Now()
// Add some errors
state = state.AddError(now)
state = state.AddError(now)
// Reset the state
state = state.Empty()
// Should pass check after reset
result := state.Check(now)
assert.True(t, option.IsSome(result), "state should pass Check after Empty")
})
t.Run("AddSuccess resets failure count", func(t *testing.T) {
vt := NewVirtualTimer(time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC))
maxFailures := uint(3)
state := MakeClosedStateCounter(maxFailures)
// Add errors
state = state.AddError(vt.Now())
vt.Advance(1 * time.Second)
state = state.AddError(vt.Now())
vt.Advance(1 * time.Second)
// Add success (should reset counter)
state = state.AddSuccess(vt.Now())
vt.Advance(1 * time.Second)
// Add another error (this is now the first consecutive error)
state = state.AddError(vt.Now())
// Should still pass check (only 1 consecutive error, threshold is 3)
result := state.Check(vt.Now())
assert.True(t, option.IsSome(result), "AddSuccess should reset failure count")
})
t.Run("circuit opens when failures reach threshold", func(t *testing.T) {
vt := NewVirtualTimer(time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC))
maxFailures := uint(3)
state := MakeClosedStateCounter(maxFailures)
now := vt.Now()
// Add errors up to but not including threshold
state = state.AddError(now)
state = state.AddError(now)
// Should still pass before threshold
result := state.Check(now)
assert.True(t, option.IsSome(result), "should pass Check before threshold")
// Add one more error to reach threshold
state = state.AddError(now)
// Should fail check at threshold
result = state.Check(now)
assert.True(t, option.IsNone(result), "should fail Check when reaching threshold")
})
t.Run("circuit opens exactly at maxFailures", func(t *testing.T) {
vt := NewVirtualTimer(time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC))
maxFailures := uint(5)
state := MakeClosedStateCounter(maxFailures)
now := vt.Now()
// Add exactly maxFailures - 1 errors
for i := uint(0); i < maxFailures-1; i++ {
state = state.AddError(now)
}
// Should still pass
result := state.Check(now)
assert.True(t, option.IsSome(result), "should pass Check before maxFailures")
// Add one more to reach maxFailures
state = state.AddError(now)
// Should fail now
result = state.Check(now)
assert.True(t, option.IsNone(result), "should fail Check at maxFailures")
})
t.Run("zero maxFailures means circuit is always open", func(t *testing.T) {
vt := NewVirtualTimer(time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC))
maxFailures := uint(0)
state := MakeClosedStateCounter(maxFailures)
now := vt.Now()
// Initial state should already fail (0 >= 0)
result := state.Check(now)
assert.True(t, option.IsNone(result), "initial state should fail Check with maxFailures=0")
// Add one error
state = state.AddError(now)
// Should still fail
result = state.Check(now)
assert.True(t, option.IsNone(result), "should fail Check after error with maxFailures=0")
})
t.Run("AddSuccess resets counter between errors", func(t *testing.T) {
vt := NewVirtualTimer(time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC))
maxFailures := uint(3)
state := MakeClosedStateCounter(maxFailures)
// Add errors
state = state.AddError(vt.Now())
vt.Advance(1 * time.Second)
state = state.AddError(vt.Now())
vt.Advance(1 * time.Second)
// Add success (resets counter)
state = state.AddSuccess(vt.Now())
vt.Advance(1 * time.Second)
// Add more errors
state = state.AddError(vt.Now())
vt.Advance(1 * time.Second)
state = state.AddError(vt.Now())
// Should still pass (only 2 consecutive errors after reset)
result := state.Check(vt.Now())
assert.True(t, option.IsSome(result), "should pass with 2 consecutive errors after reset")
// Add one more to reach threshold
vt.Advance(1 * time.Second)
state = state.AddError(vt.Now())
// Should fail at threshold
result = state.Check(vt.Now())
assert.True(t, option.IsNone(result), "should fail after reaching threshold")
})
t.Run("Empty can be called multiple times", func(t *testing.T) {
vt := NewVirtualTimer(time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC))
maxFailures := uint(2)
state := MakeClosedStateCounter(maxFailures)
now := vt.Now()
// Add errors
state = state.AddError(now)
state = state.AddError(now)
state = state.AddError(now)
// Reset multiple times
state = state.Empty()
state = state.Empty()
state = state.Empty()
// Should still pass
result := state.Check(now)
assert.True(t, option.IsSome(result), "state should pass Check after multiple Empty calls")
})
t.Run("time parameter is ignored in counter implementation", func(t *testing.T) {
vt := NewVirtualTimer(time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC))
maxFailures := uint(3)
state := MakeClosedStateCounter(maxFailures)
// Use different times for each operation
time1 := vt.Now()
time2 := time1.Add(1 * time.Hour)
state = state.AddError(time1)
state = state.AddError(time2)
// Check with yet another time
time3 := time1.Add(2 * time.Hour)
result := state.Check(time3)
// Should still pass (2 errors, threshold is 3, not reached yet)
assert.True(t, option.IsSome(result), "time parameter should not affect counter behavior")
// Add one more to reach threshold
state = state.AddError(time1)
result = state.Check(time1)
assert.True(t, option.IsNone(result), "should fail after reaching threshold regardless of time")
})
t.Run("large maxFailures value", func(t *testing.T) {
vt := NewVirtualTimer(time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC))
maxFailures := uint(1000)
state := MakeClosedStateCounter(maxFailures)
now := vt.Now()
// Add many errors but not reaching threshold
for i := uint(0); i < maxFailures-1; i++ {
state = state.AddError(now)
}
// Should still pass
result := state.Check(now)
assert.True(t, option.IsSome(result), "should pass Check with large maxFailures before threshold")
// Add one more to reach threshold
state = state.AddError(now)
// Should fail
result = state.Check(now)
assert.True(t, option.IsNone(result), "should fail Check with large maxFailures at threshold")
})
t.Run("state is immutable - original unchanged after AddError", func(t *testing.T) {
vt := NewVirtualTimer(time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC))
maxFailures := uint(2)
originalState := MakeClosedStateCounter(maxFailures)
now := vt.Now()
// Create new state by adding error
newState := originalState.AddError(now)
// Original should still pass check
result := originalState.Check(now)
assert.True(t, option.IsSome(result), "original state should be unchanged")
// New state should reach threshold (2 errors total, threshold is 2)
newState = newState.AddError(now)
result = newState.Check(now)
assert.True(t, option.IsNone(result), "new state should fail after reaching threshold")
// Original should still pass
result = originalState.Check(now)
assert.True(t, option.IsSome(result), "original state should still be unchanged")
})
t.Run("state is immutable - original unchanged after Empty", func(t *testing.T) {
vt := NewVirtualTimer(time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC))
maxFailures := uint(2)
state := MakeClosedStateCounter(maxFailures)
now := vt.Now()
// Add errors to original
state = state.AddError(now)
state = state.AddError(now)
stateWithErrors := state
// Create new state by calling Empty
emptyState := stateWithErrors.Empty()
// Original with errors should reach threshold (2 errors total, threshold is 2)
result := stateWithErrors.Check(now)
assert.True(t, option.IsNone(result), "state with errors should fail after reaching threshold")
// Empty state should pass
result = emptyState.Check(now)
assert.True(t, option.IsSome(result), "empty state should pass Check")
})
t.Run("AddSuccess prevents circuit from opening", func(t *testing.T) {
vt := NewVirtualTimer(time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC))
maxFailures := uint(3)
state := MakeClosedStateCounter(maxFailures)
// Add errors close to threshold
state = state.AddError(vt.Now())
vt.Advance(1 * time.Second)
state = state.AddError(vt.Now())
vt.Advance(1 * time.Second)
// Add success before reaching threshold
state = state.AddSuccess(vt.Now())
vt.Advance(1 * time.Second)
// Add more errors
state = state.AddError(vt.Now())
vt.Advance(1 * time.Second)
state = state.AddError(vt.Now())
// Should still pass (only 2 consecutive errors)
result := state.Check(vt.Now())
assert.True(t, option.IsSome(result), "circuit should stay closed after success reset")
})
t.Run("multiple AddSuccess calls keep counter at zero", func(t *testing.T) {
vt := NewVirtualTimer(time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC))
maxFailures := uint(2)
state := MakeClosedStateCounter(maxFailures)
// Add error
state = state.AddError(vt.Now())
vt.Advance(1 * time.Second)
// Multiple successes
state = state.AddSuccess(vt.Now())
vt.Advance(1 * time.Second)
state = state.AddSuccess(vt.Now())
vt.Advance(1 * time.Second)
state = state.AddSuccess(vt.Now())
vt.Advance(1 * time.Second)
// Should still pass
result := state.Check(vt.Now())
assert.True(t, option.IsSome(result), "multiple AddSuccess should keep counter at zero")
// Add errors to reach threshold
state = state.AddError(vt.Now())
vt.Advance(1 * time.Second)
state = state.AddError(vt.Now())
// Should fail
result = state.Check(vt.Now())
assert.True(t, option.IsNone(result), "should fail after reaching threshold")
})
t.Run("alternating errors and successes never opens circuit", func(t *testing.T) {
vt := NewVirtualTimer(time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC))
maxFailures := uint(3)
state := MakeClosedStateCounter(maxFailures)
// Alternate errors and successes
for i := 0; i < 10; i++ {
state = state.AddError(vt.Now())
vt.Advance(500 * time.Millisecond)
state = state.AddSuccess(vt.Now())
vt.Advance(500 * time.Millisecond)
}
// Should still pass (never had consecutive failures)
result := state.Check(vt.Now())
assert.True(t, option.IsSome(result), "alternating errors and successes should never open circuit")
})
}
func TestAddToSlice(t *testing.T) {
ordTime := ord.OrdTime()
t.Run("adds item to empty slice and returns sorted result", func(t *testing.T) {
input := []time.Time{}
item := time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC)
result := addToSlice(ordTime, input, item)
assert.Len(t, result, 1, "result should have 1 element")
assert.Equal(t, item, result[0], "result should contain the added item")
})
t.Run("adds item and maintains sorted order", func(t *testing.T) {
baseTime := time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC)
input := []time.Time{
baseTime,
baseTime.Add(20 * time.Second),
baseTime.Add(40 * time.Second),
}
item := baseTime.Add(30 * time.Second)
result := addToSlice(ordTime, input, item)
assert.Len(t, result, 4, "result should have 4 elements")
// Verify sorted order
assert.Equal(t, baseTime, result[0])
assert.Equal(t, baseTime.Add(20*time.Second), result[1])
assert.Equal(t, baseTime.Add(30*time.Second), result[2])
assert.Equal(t, baseTime.Add(40*time.Second), result[3])
})
t.Run("adds item at beginning when it's earliest", func(t *testing.T) {
baseTime := time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC)
input := []time.Time{
baseTime.Add(20 * time.Second),
baseTime.Add(40 * time.Second),
}
item := baseTime
result := addToSlice(ordTime, input, item)
assert.Len(t, result, 3, "result should have 3 elements")
assert.Equal(t, baseTime, result[0], "earliest item should be first")
assert.Equal(t, baseTime.Add(20*time.Second), result[1])
assert.Equal(t, baseTime.Add(40*time.Second), result[2])
})
t.Run("adds item at end when it's latest", func(t *testing.T) {
baseTime := time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC)
input := []time.Time{
baseTime,
baseTime.Add(20 * time.Second),
}
item := baseTime.Add(40 * time.Second)
result := addToSlice(ordTime, input, item)
assert.Len(t, result, 3, "result should have 3 elements")
assert.Equal(t, baseTime, result[0])
assert.Equal(t, baseTime.Add(20*time.Second), result[1])
assert.Equal(t, baseTime.Add(40*time.Second), result[2], "latest item should be last")
})
t.Run("does not modify original slice", func(t *testing.T) {
baseTime := time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC)
input := []time.Time{
baseTime,
baseTime.Add(20 * time.Second),
}
originalLen := len(input)
originalFirst := input[0]
originalLast := input[1]
item := baseTime.Add(10 * time.Second)
result := addToSlice(ordTime, input, item)
// Verify original slice is unchanged
assert.Len(t, input, originalLen, "original slice length should be unchanged")
assert.Equal(t, originalFirst, input[0], "original slice first element should be unchanged")
assert.Equal(t, originalLast, input[1], "original slice last element should be unchanged")
// Verify result is different and has correct length
assert.Len(t, result, 3, "result should have new length")
// Verify the result contains the new item in sorted order
assert.Equal(t, baseTime, result[0])
assert.Equal(t, baseTime.Add(10*time.Second), result[1])
assert.Equal(t, baseTime.Add(20*time.Second), result[2])
})
t.Run("handles duplicate timestamps", func(t *testing.T) {
baseTime := time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC)
input := []time.Time{
baseTime,
baseTime.Add(20 * time.Second),
}
item := baseTime // duplicate of first element
result := addToSlice(ordTime, input, item)
assert.Len(t, result, 3, "result should have 3 elements including duplicate")
// Both instances of baseTime should be present
count := 0
for _, ts := range result {
if ts.Equal(baseTime) {
count++
}
}
assert.Equal(t, 2, count, "should have 2 instances of the duplicate timestamp")
})
t.Run("maintains sort order with unsorted input", func(t *testing.T) {
baseTime := time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC)
// Input is intentionally unsorted
input := []time.Time{
baseTime.Add(40 * time.Second),
baseTime,
baseTime.Add(20 * time.Second),
}
item := baseTime.Add(30 * time.Second)
result := addToSlice(ordTime, input, item)
assert.Len(t, result, 4, "result should have 4 elements")
// Verify result is sorted regardless of input order
for i := 0; i < len(result)-1; i++ {
assert.True(t, result[i].Before(result[i+1]) || result[i].Equal(result[i+1]),
"result should be sorted: element %d (%v) should be <= element %d (%v)",
i, result[i], i+1, result[i+1])
}
})
t.Run("works with nanosecond precision", func(t *testing.T) {
baseTime := time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC)
input := []time.Time{
baseTime,
baseTime.Add(2 * time.Nanosecond),
}
item := baseTime.Add(1 * time.Nanosecond)
result := addToSlice(ordTime, input, item)
assert.Len(t, result, 3, "result should have 3 elements")
assert.Equal(t, baseTime, result[0])
assert.Equal(t, baseTime.Add(1*time.Nanosecond), result[1])
assert.Equal(t, baseTime.Add(2*time.Nanosecond), result[2])
})
}
func TestMakeClosedStateHistory(t *testing.T) {
t.Run("creates a valid ClosedState", func(t *testing.T) {
timeWindow := 1 * time.Minute
maxFailures := uint(3)
state := MakeClosedStateHistory(timeWindow, maxFailures)
assert.NotNil(t, state, "MakeClosedStateHistory should return a non-nil ClosedState")
})
t.Run("initial state passes Check", func(t *testing.T) {
timeWindow := 1 * time.Minute
maxFailures := uint(3)
state := MakeClosedStateHistory(timeWindow, maxFailures)
now := time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC)
result := state.Check(now)
assert.True(t, option.IsSome(result), "initial state should pass Check (return Some, circuit stays closed)")
})
t.Run("Empty purges failure history", func(t *testing.T) {
timeWindow := 1 * time.Minute
maxFailures := uint(2)
state := MakeClosedStateHistory(timeWindow, maxFailures)
baseTime := time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC)
// Add some errors
state = state.AddError(baseTime)
state = state.AddError(baseTime.Add(10 * time.Second))
// Reset the state
state = state.Empty()
// Should pass check after reset
result := state.Check(baseTime.Add(20 * time.Second))
assert.True(t, option.IsSome(result), "state should pass Check after Empty")
})
t.Run("AddSuccess purges entire failure history", func(t *testing.T) {
timeWindow := 1 * time.Minute
maxFailures := uint(3)
state := MakeClosedStateHistory(timeWindow, maxFailures)
baseTime := time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC)
// Add errors
state = state.AddError(baseTime)
state = state.AddError(baseTime.Add(10 * time.Second))
// Add success (should purge all history)
state = state.AddSuccess(baseTime.Add(20 * time.Second))
// Add another error (this is now the first error in history)
state = state.AddError(baseTime.Add(30 * time.Second))
// Should still pass check (only 1 error in history, threshold is 3)
result := state.Check(baseTime.Add(30 * time.Second))
assert.True(t, option.IsSome(result), "AddSuccess should purge entire failure history")
})
t.Run("circuit opens when failures reach threshold within time window", func(t *testing.T) {
timeWindow := 1 * time.Minute
maxFailures := uint(3)
state := MakeClosedStateHistory(timeWindow, maxFailures)
baseTime := time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC)
// Add errors within time window but not reaching threshold
state = state.AddError(baseTime)
state = state.AddError(baseTime.Add(10 * time.Second))
// Should still pass before threshold
result := state.Check(baseTime.Add(20 * time.Second))
assert.True(t, option.IsSome(result), "should pass Check before threshold")
// Add one more error to reach threshold
state = state.AddError(baseTime.Add(30 * time.Second))
// Should fail check at threshold
result = state.Check(baseTime.Add(30 * time.Second))
assert.True(t, option.IsNone(result), "should fail Check when reaching threshold")
})
t.Run("old failures outside time window are automatically removed", func(t *testing.T) {
timeWindow := 1 * time.Minute
maxFailures := uint(3)
state := MakeClosedStateHistory(timeWindow, maxFailures)
baseTime := time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC)
// Add errors that will be outside the time window
state = state.AddError(baseTime)
state = state.AddError(baseTime.Add(10 * time.Second))
// Add error after time window has passed (this should remove old errors)
state = state.AddError(baseTime.Add(2 * time.Minute))
// Should pass check (only 1 error in window, old ones removed)
result := state.Check(baseTime.Add(2 * time.Minute))
assert.True(t, option.IsSome(result), "old failures should be removed from history")
})
t.Run("failures within time window are retained", func(t *testing.T) {
timeWindow := 1 * time.Minute
maxFailures := uint(3)
state := MakeClosedStateHistory(timeWindow, maxFailures)
baseTime := time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC)
// Add errors within time window
state = state.AddError(baseTime)
state = state.AddError(baseTime.Add(30 * time.Second))
state = state.AddError(baseTime.Add(50 * time.Second))
// All errors are within 1 minute window, should fail check
result := state.Check(baseTime.Add(50 * time.Second))
assert.True(t, option.IsNone(result), "failures within time window should be retained")
})
t.Run("sliding window behavior - errors slide out over time", func(t *testing.T) {
timeWindow := 1 * time.Minute
maxFailures := uint(3)
state := MakeClosedStateHistory(timeWindow, maxFailures)
baseTime := time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC)
// Add 3 errors to reach threshold
state = state.AddError(baseTime)
state = state.AddError(baseTime.Add(10 * time.Second))
state = state.AddError(baseTime.Add(20 * time.Second))
// Circuit should be open
result := state.Check(baseTime.Add(20 * time.Second))
assert.True(t, option.IsNone(result), "circuit should be open with 3 failures")
// Add error after first failure has expired (> 1 minute from first error)
// This should remove the first error, leaving only 3 in window
state = state.AddError(baseTime.Add(70 * time.Second))
// Should still fail check (3 errors in window after pruning)
result = state.Check(baseTime.Add(70 * time.Second))
assert.True(t, option.IsNone(result), "circuit should remain open with 3 failures in window")
})
t.Run("zero maxFailures means circuit is always open", func(t *testing.T) {
timeWindow := 1 * time.Minute
maxFailures := uint(0)
state := MakeClosedStateHistory(timeWindow, maxFailures)
baseTime := time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC)
// Initial state should already fail (0 >= 0)
result := state.Check(baseTime)
assert.True(t, option.IsNone(result), "initial state should fail Check with maxFailures=0")
// Add one error
state = state.AddError(baseTime)
// Should still fail
result = state.Check(baseTime)
assert.True(t, option.IsNone(result), "should fail Check after error with maxFailures=0")
})
t.Run("success purges history even with failures in time window", func(t *testing.T) {
timeWindow := 1 * time.Minute
maxFailures := uint(3)
state := MakeClosedStateHistory(timeWindow, maxFailures)
baseTime := time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC)
// Add errors within time window
state = state.AddError(baseTime)
state = state.AddError(baseTime.Add(10 * time.Second))
// Add success (purges all history)
state = state.AddSuccess(baseTime.Add(20 * time.Second))
// Add more errors
state = state.AddError(baseTime.Add(30 * time.Second))
state = state.AddError(baseTime.Add(40 * time.Second))
// Should still pass (only 2 errors after purge)
result := state.Check(baseTime.Add(40 * time.Second))
assert.True(t, option.IsSome(result), "success should purge all history")
// Add one more to reach threshold
state = state.AddError(baseTime.Add(50 * time.Second))
// Should fail at threshold
result = state.Check(baseTime.Add(50 * time.Second))
assert.True(t, option.IsNone(result), "should fail after reaching threshold")
})
t.Run("multiple successes keep history empty", func(t *testing.T) {
timeWindow := 1 * time.Minute
maxFailures := uint(2)
state := MakeClosedStateHistory(timeWindow, maxFailures)
baseTime := time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC)
// Add error
state = state.AddError(baseTime)
// Multiple successes
state = state.AddSuccess(baseTime.Add(10 * time.Second))
state = state.AddSuccess(baseTime.Add(20 * time.Second))
state = state.AddSuccess(baseTime.Add(30 * time.Second))
// Should still pass
result := state.Check(baseTime.Add(30 * time.Second))
assert.True(t, option.IsSome(result), "multiple AddSuccess should keep history empty")
// Add errors to reach threshold
state = state.AddError(baseTime.Add(40 * time.Second))
state = state.AddError(baseTime.Add(50 * time.Second))
// Should fail
result = state.Check(baseTime.Add(50 * time.Second))
assert.True(t, option.IsNone(result), "should fail after reaching threshold")
})
t.Run("state is immutable - original unchanged after AddError", func(t *testing.T) {
timeWindow := 1 * time.Minute
maxFailures := uint(2)
originalState := MakeClosedStateHistory(timeWindow, maxFailures)
baseTime := time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC)
// Create new state by adding error
newState := originalState.AddError(baseTime)
// Original should still pass check
result := originalState.Check(baseTime)
assert.True(t, option.IsSome(result), "original state should be unchanged")
// New state should reach threshold after another error
newState = newState.AddError(baseTime.Add(10 * time.Second))
result = newState.Check(baseTime.Add(10 * time.Second))
assert.True(t, option.IsNone(result), "new state should fail after reaching threshold")
// Original should still pass
result = originalState.Check(baseTime)
assert.True(t, option.IsSome(result), "original state should still be unchanged")
})
t.Run("state is immutable - original unchanged after Empty", func(t *testing.T) {
timeWindow := 1 * time.Minute
maxFailures := uint(2)
state := MakeClosedStateHistory(timeWindow, maxFailures)
baseTime := time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC)
// Add errors to original
state = state.AddError(baseTime)
state = state.AddError(baseTime.Add(10 * time.Second))
stateWithErrors := state
// Create new state by calling Empty
emptyState := stateWithErrors.Empty()
// Original with errors should fail check
result := stateWithErrors.Check(baseTime.Add(10 * time.Second))
assert.True(t, option.IsNone(result), "state with errors should fail after reaching threshold")
// Empty state should pass
result = emptyState.Check(baseTime.Add(10 * time.Second))
assert.True(t, option.IsSome(result), "empty state should pass Check")
})
t.Run("exact time window boundary behavior", func(t *testing.T) {
timeWindow := 1 * time.Minute
maxFailures := uint(3)
state := MakeClosedStateHistory(timeWindow, maxFailures)
baseTime := time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC)
// Add error at baseTime
state = state.AddError(baseTime)
// Add error exactly at time window boundary
state = state.AddError(baseTime.Add(1 * time.Minute))
// The first error should be removed (it's now outside the window)
// Only 1 error should remain
result := state.Check(baseTime.Add(1 * time.Minute))
assert.True(t, option.IsSome(result), "error at exact window boundary should remove older errors")
})
t.Run("multiple errors at same timestamp", func(t *testing.T) {
timeWindow := 1 * time.Minute
maxFailures := uint(3)
state := MakeClosedStateHistory(timeWindow, maxFailures)
baseTime := time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC)
// Add multiple errors at same time
state = state.AddError(baseTime)
state = state.AddError(baseTime)
state = state.AddError(baseTime)
// Should fail check (3 errors at same time)
result := state.Check(baseTime)
assert.True(t, option.IsNone(result), "multiple errors at same timestamp should count separately")
})
t.Run("errors added out of chronological order are sorted", func(t *testing.T) {
timeWindow := 1 * time.Minute
maxFailures := uint(4)
state := MakeClosedStateHistory(timeWindow, maxFailures)
baseTime := time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC)
// Add errors out of order
state = state.AddError(baseTime.Add(30 * time.Second))
state = state.AddError(baseTime.Add(5 * time.Second))
state = state.AddError(baseTime.Add(50 * time.Second))
// Add error that should trigger pruning
state = state.AddError(baseTime.Add(70 * time.Second))
// The error at 5s should be removed (> 1 minute from 70s: 70-5=65 > 60)
// Should have 3 errors remaining (30s, 50s, 70s)
result := state.Check(baseTime.Add(70 * time.Second))
assert.True(t, option.IsSome(result), "errors should be sorted and pruned correctly")
})
t.Run("large time window with many failures", func(t *testing.T) {
timeWindow := 24 * time.Hour
maxFailures := uint(100)
state := MakeClosedStateHistory(timeWindow, maxFailures)
baseTime := time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC)
// Add many failures within the window
for i := 0; i < 99; i++ {
state = state.AddError(baseTime.Add(time.Duration(i) * time.Minute))
}
// Should still pass (99 < 100)
result := state.Check(baseTime.Add(99 * time.Minute))
assert.True(t, option.IsSome(result), "should pass with 99 failures when threshold is 100")
// Add one more to reach threshold
state = state.AddError(baseTime.Add(100 * time.Minute))
// Should fail
result = state.Check(baseTime.Add(100 * time.Minute))
assert.True(t, option.IsNone(result), "should fail at threshold with large window")
})
t.Run("very short time window", func(t *testing.T) {
timeWindow := 100 * time.Millisecond
maxFailures := uint(3)
state := MakeClosedStateHistory(timeWindow, maxFailures)
baseTime := time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC)
// Add errors within short window
state = state.AddError(baseTime)
state = state.AddError(baseTime.Add(50 * time.Millisecond))
state = state.AddError(baseTime.Add(90 * time.Millisecond))
// Should fail (3 errors within 100ms)
result := state.Check(baseTime.Add(90 * time.Millisecond))
assert.True(t, option.IsNone(result), "should fail with errors in short time window")
// Add error after window expires
state = state.AddError(baseTime.Add(200 * time.Millisecond))
// Should pass (old errors removed, only 1 in window)
result = state.Check(baseTime.Add(200 * time.Millisecond))
assert.True(t, option.IsSome(result), "should pass after short window expires")
})
t.Run("success prevents circuit from opening", func(t *testing.T) {
timeWindow := 1 * time.Minute
maxFailures := uint(3)
state := MakeClosedStateHistory(timeWindow, maxFailures)
baseTime := time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC)
// Add errors close to threshold
state = state.AddError(baseTime)
state = state.AddError(baseTime.Add(10 * time.Second))
// Add success before reaching threshold
state = state.AddSuccess(baseTime.Add(20 * time.Second))
// Add more errors
state = state.AddError(baseTime.Add(30 * time.Second))
state = state.AddError(baseTime.Add(40 * time.Second))
// Should still pass (only 2 errors after success purge)
result := state.Check(baseTime.Add(40 * time.Second))
assert.True(t, option.IsSome(result), "circuit should stay closed after success purge")
})
t.Run("Empty can be called multiple times", func(t *testing.T) {
timeWindow := 1 * time.Minute
maxFailures := uint(2)
state := MakeClosedStateHistory(timeWindow, maxFailures)
baseTime := time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC)
// Add errors
state = state.AddError(baseTime)
state = state.AddError(baseTime.Add(10 * time.Second))
state = state.AddError(baseTime.Add(20 * time.Second))
// Reset multiple times
state = state.Empty()
state = state.Empty()
state = state.Empty()
// Should still pass
result := state.Check(baseTime.Add(30 * time.Second))
assert.True(t, option.IsSome(result), "state should pass Check after multiple Empty calls")
})
t.Run("gradual failure accumulation within window", func(t *testing.T) {
timeWindow := 1 * time.Minute
maxFailures := uint(5)
state := MakeClosedStateHistory(timeWindow, maxFailures)
baseTime := time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC)
// Add failures gradually
state = state.AddError(baseTime)
state = state.AddError(baseTime.Add(15 * time.Second))
state = state.AddError(baseTime.Add(30 * time.Second))
state = state.AddError(baseTime.Add(45 * time.Second))
// Should still pass (4 < 5)
result := state.Check(baseTime.Add(45 * time.Second))
assert.True(t, option.IsSome(result), "should pass before threshold")
// Add one more within window
state = state.AddError(baseTime.Add(55 * time.Second))
// Should fail (5 >= 5)
result = state.Check(baseTime.Add(55 * time.Second))
assert.True(t, option.IsNone(result), "should fail at threshold")
})
}

v2/circuitbreaker/error.go

@@ -0,0 +1,335 @@
// Package circuitbreaker provides error types and utilities for circuit breaker implementations.
package circuitbreaker
import (
"crypto/x509"
"errors"
"fmt"
"net"
"net/http"
"net/url"
"syscall"
"time"
E "github.com/IBM/fp-go/v2/errors"
FH "github.com/IBM/fp-go/v2/http"
"github.com/IBM/fp-go/v2/option"
)
// CircuitBreakerError represents an error that occurs when a circuit breaker is in the open state.
//
// When a circuit breaker opens due to too many failures, it prevents further operations
// from executing until a reset time is reached. This error type communicates that state
// and provides information about when the circuit breaker will attempt to close again.
//
// Fields:
// - Name: The name identifying this circuit breaker instance
// - ResetAt: The time at which the circuit breaker will transition from open to half-open state
//
// Thread Safety: This type is immutable and safe for concurrent use.
type CircuitBreakerError struct {
Name string
ResetAt time.Time
}
// Error implements the error interface for CircuitBreakerError.
//
// Returns a formatted error message indicating that the circuit breaker is open
// and when it will attempt to close.
//
// Returns:
// - A string describing the circuit breaker state and reset time
//
// Thread Safety: This method is safe for concurrent use as it only reads immutable fields.
//
// Example:
//
// err := &CircuitBreakerError{Name: "API", ResetAt: time.Now().Add(30 * time.Second)}
// fmt.Println(err.Error())
// // Output: circuit breaker is open [API], will close at 2026-01-09 12:20:47.123 +0100 CET
func (e *CircuitBreakerError) Error() string {
return fmt.Sprintf("circuit breaker is open [%s], will close at %s", e.Name, e.ResetAt)
}
// MakeCircuitBreakerErrorWithName creates a circuit breaker error constructor with a custom name.
//
// This function returns a constructor that creates CircuitBreakerError instances with a specific
// circuit breaker name. This is useful when you have multiple circuit breakers in your system
// and want to identify which one is open in error messages.
//
// Parameters:
// - name: The name to identify this circuit breaker in error messages
//
// Returns:
// - A function that takes a reset time and returns a CircuitBreakerError with the specified name
//
// Thread Safety: The returned function is safe for concurrent use as it creates new error
// instances on each call.
//
// Example:
//
// makeDBError := MakeCircuitBreakerErrorWithName("Database Circuit Breaker")
// err := makeDBError(time.Now().Add(30 * time.Second))
// fmt.Println(err.Error())
// // Output: circuit breaker is open [Database Circuit Breaker], will close at 2026-01-09 12:20:47.123 +0100 CET
func MakeCircuitBreakerErrorWithName(name string) func(time.Time) error {
return func(resetTime time.Time) error {
return &CircuitBreakerError{Name: name, ResetAt: resetTime}
}
}
// MakeCircuitBreakerError creates a new CircuitBreakerError with the specified reset time.
//
// This constructor function creates a circuit breaker error that indicates when the
// circuit breaker will transition from the open state to the half-open state, allowing
// test requests to determine if the underlying service has recovered.
//
// Parameters:
// - resetTime: The time at which the circuit breaker will attempt to close
//
// Returns:
// - An error representing the circuit breaker open state
//
// Thread Safety: This function is safe for concurrent use as it creates new error
// instances on each call.
//
// Example:
//
// resetTime := time.Now().Add(30 * time.Second)
// err := MakeCircuitBreakerError(resetTime)
// if cbErr, ok := err.(*CircuitBreakerError); ok {
// fmt.Printf("Circuit breaker will reset at: %s\n", cbErr.ResetAt)
// }
var MakeCircuitBreakerError = MakeCircuitBreakerErrorWithName("Generic Circuit Breaker")
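// Illustrative sketch (not part of the library API): caller-side handling that
// detects an open circuit breaker and derives a wait time from ResetAt. The
// helper name exampleHandleOpenCircuit is hypothetical.
func exampleHandleOpenCircuit(err error) time.Duration {
var cbErr *CircuitBreakerError
if errors.As(err, &cbErr) {
// Wait until the breaker is scheduled to transition to half-open before retrying.
return time.Until(cbErr.ResetAt)
}
return 0
}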
// AnyError converts an error to an Option, wrapping non-nil errors in Some and nil errors in None.
//
// This variable provides a functional way to handle errors by converting them to Option types.
// It's particularly useful in functional programming contexts where you want to treat errors
// as optional values rather than using traditional error handling patterns.
//
// Behavior:
// - If the error is non-nil, returns Some(error)
// - If the error is nil, returns None
//
// Thread Safety: This function is pure and safe for concurrent use.
//
// Example:
//
// err := errors.New("something went wrong")
// optErr := AnyError(err) // Some(error)
//
// var noErr error = nil
// optNoErr := AnyError(noErr) // None
//
// // Using in functional pipelines
// result := F.Pipe2(
// someOperation(),
// AnyError,
// O.Map(func(e error) string { return e.Error() }),
// )
var AnyError = option.FromPredicate(E.IsNonNil)
// shouldOpenCircuit determines if an error should cause a circuit breaker to open.
//
// This function checks if an error represents an infrastructure or server problem
// that indicates the service is unhealthy and should trigger circuit breaker protection.
// It examines both the error type and, for HTTP errors, the status code.
//
// Errors that should open the circuit include:
// - HTTP 5xx server errors (500-599) indicating server-side problems
// - Network errors (connection refused, connection reset, timeouts)
// - DNS resolution errors
// - TLS/certificate errors
// - Other infrastructure-related errors
//
// Errors that should NOT open the circuit include:
// - HTTP 4xx client errors (bad request, unauthorized, not found, etc.)
// - Application-level validation errors
// - Business logic errors
//
// The function unwraps error chains to find the root cause, making it compatible
// with wrapped errors created by fmt.Errorf with %w or errors.Join.
//
// Parameters:
// - err: The error to evaluate (may be nil)
//
// Returns:
// - true if the error should cause the circuit to open, false otherwise
//
// Thread Safety: This function is pure and safe for concurrent use. It does not
// modify any state.
//
// Example:
//
// // HTTP 500 error - should open circuit
// httpErr := &FH.HttpError{...} // status 500
// if shouldOpenCircuit(httpErr) {
// // Open circuit breaker
// }
//
// // HTTP 404 error - should NOT open circuit (client error)
// notFoundErr := &FH.HttpError{...} // status 404
// if !shouldOpenCircuit(notFoundErr) {
// // Don't open circuit, this is a client error
// }
//
// // Network timeout - should open circuit
// timeoutErr := &net.OpError{Op: "dial", Err: syscall.ETIMEDOUT}
// if shouldOpenCircuit(timeoutErr) {
// // Open circuit breaker
// }
func shouldOpenCircuit(err error) bool {
if err == nil {
return false
}
// Check for HTTP errors with server status codes (5xx)
var httpErr *FH.HttpError
if errors.As(err, &httpErr) {
statusCode := httpErr.StatusCode()
// Only 5xx errors should open the circuit
// 4xx errors are client errors and shouldn't affect circuit state
return statusCode >= http.StatusInternalServerError && statusCode < 600
}
// Check for network operation errors
var opErr *net.OpError
if errors.As(err, &opErr) {
// Network timeouts should open the circuit
if opErr.Timeout() {
return true
}
// Check the underlying error
if opErr.Err != nil {
return isInfrastructureError(opErr.Err)
}
return true
}
// Check for DNS errors
var dnsErr *net.DNSError
if errors.As(err, &dnsErr) {
return true
}
// Check for URL errors (often wrap network errors)
var urlErr *url.Error
if errors.As(err, &urlErr) {
if urlErr.Timeout() {
return true
}
// Recursively check the wrapped error
return shouldOpenCircuit(urlErr.Err)
}
// Check for specific syscall errors that indicate infrastructure problems
return isInfrastructureError(err) || isTLSError(err)
}
// isInfrastructureError checks if an error is a low-level infrastructure error
// that should cause the circuit to open.
//
// This function examines syscall errors to identify network and system-level failures
// that indicate the service is unavailable or unreachable.
//
// Infrastructure errors include:
// - ECONNREFUSED: Connection refused (service not listening)
// - ECONNRESET: Connection reset by peer (service crashed or network issue)
// - ECONNABORTED: Connection aborted (network issue)
// - ENETUNREACH: Network unreachable (routing problem)
// - EHOSTUNREACH: Host unreachable (host down or network issue)
// - EPIPE: Broken pipe (connection closed unexpectedly)
// - ETIMEDOUT: Operation timed out (service not responding)
//
// Parameters:
// - err: The error to check
//
// Returns:
// - true if the error is an infrastructure error, false otherwise
//
// Thread Safety: This function is pure and safe for concurrent use.
func isInfrastructureError(err error) bool {
// syscall errors are wrapped as syscall.Errno values (not pointers), so match on the value type
var errno syscall.Errno
if errors.As(err, &errno) {
switch errno {
case syscall.ECONNREFUSED,
syscall.ECONNRESET,
syscall.ECONNABORTED,
syscall.ENETUNREACH,
syscall.EHOSTUNREACH,
syscall.EPIPE,
syscall.ETIMEDOUT:
return true
}
}
return false
}
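// Illustrative sketch (not part of the library API): a dial timeout wrapped in a
// *net.OpError is classified as an infrastructure failure and therefore counts
// toward opening the circuit. The helper name exampleClassifyDialTimeout is hypothetical.
func exampleClassifyDialTimeout() bool {
err := &net.OpError{Op: "dial", Net: "tcp", Err: syscall.ETIMEDOUT}
// true: the timeout indicates the service is unreachable or unresponsive
return shouldOpenCircuit(err)
}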
// isTLSError checks if an error is a TLS/certificate error that should cause the circuit to open.
//
// TLS errors typically indicate infrastructure or configuration problems that prevent
// secure communication with the service. These errors suggest the service is not properly
// configured or accessible.
//
// TLS errors include:
// - Certificate verification failures (invalid, expired, or malformed certificates)
// - Unknown certificate authority errors (untrusted CA)
//
// Parameters:
// - err: The error to check
//
// Returns:
// - true if the error is a TLS/certificate error, false otherwise
//
// Thread Safety: This function is pure and safe for concurrent use.
func isTLSError(err error) bool {
// Certificate verification failed
var certErr *x509.CertificateInvalidError
if errors.As(err, &certErr) {
return true
}
// Unknown authority
var unknownAuthErr *x509.UnknownAuthorityError
if errors.As(err, &unknownAuthErr) {
return true
}
return false
}
// InfrastructureError is a predicate that converts errors to Options based on whether
// they should trigger circuit breaker opening.
//
// This variable provides a functional way to filter errors that represent infrastructure
// failures (network issues, server errors, timeouts, etc.) from application-level errors
// (validation errors, business logic errors, client errors).
//
// Behavior:
// - Returns Some(error) if the error should open the circuit (infrastructure failure)
// - Returns None if the error should not open the circuit (application error)
//
// Thread Safety: This function is pure and safe for concurrent use.
//
// Use this in circuit breaker configurations to determine which errors should count
// toward the failure threshold.
//
// Example:
//
// // In a circuit breaker configuration
// breaker := MakeCircuitBreaker(
// ...,
// checkError: InfrastructureError, // Only infrastructure errors open the circuit
// ...,
// )
//
// // HTTP 500 error - returns Some(error)
// result := InfrastructureError(&FH.HttpError{...}) // Some(error)
//
// // HTTP 404 error - returns None
// result := InfrastructureError(&FH.HttpError{...}) // None
var InfrastructureError = option.FromPredicate(shouldOpenCircuit)


@@ -0,0 +1,503 @@
package circuitbreaker
import (
"crypto/x509"
"errors"
"fmt"
"net"
"net/http"
"net/url"
"testing"
"time"
FH "github.com/IBM/fp-go/v2/http"
"github.com/IBM/fp-go/v2/option"
"github.com/stretchr/testify/assert"
)
// TestCircuitBreakerError tests the CircuitBreakerError type
func TestCircuitBreakerError(t *testing.T) {
t.Run("Error returns formatted message with reset time", func(t *testing.T) {
resetTime := time.Date(2026, 1, 9, 12, 30, 0, 0, time.UTC)
err := &CircuitBreakerError{ResetAt: resetTime}
result := err.Error()
assert.Contains(t, result, "circuit breaker is open")
assert.Contains(t, result, "will close at")
assert.Contains(t, result, resetTime.String())
})
t.Run("Error message includes full timestamp", func(t *testing.T) {
resetTime := time.Now().Add(30 * time.Second)
err := &CircuitBreakerError{ResetAt: resetTime}
result := err.Error()
assert.NotEmpty(t, result)
assert.Contains(t, result, "circuit breaker is open")
})
}
// TestMakeCircuitBreakerError tests the constructor function
func TestMakeCircuitBreakerError(t *testing.T) {
t.Run("creates CircuitBreakerError with correct reset time", func(t *testing.T) {
resetTime := time.Date(2026, 1, 9, 13, 0, 0, 0, time.UTC)
err := MakeCircuitBreakerError(resetTime)
assert.NotNil(t, err)
cbErr, ok := err.(*CircuitBreakerError)
assert.True(t, ok, "should return *CircuitBreakerError type")
assert.Equal(t, resetTime, cbErr.ResetAt)
})
t.Run("returns error interface", func(t *testing.T) {
resetTime := time.Now().Add(1 * time.Minute)
err := MakeCircuitBreakerError(resetTime)
// Should be assignable to error interface
var _ error = err
assert.NotNil(t, err)
})
t.Run("created error can be type asserted", func(t *testing.T) {
resetTime := time.Now().Add(45 * time.Second)
err := MakeCircuitBreakerError(resetTime)
cbErr, ok := err.(*CircuitBreakerError)
assert.True(t, ok)
assert.Equal(t, resetTime, cbErr.ResetAt)
})
}
// TestAnyError tests the AnyError function
func TestAnyError(t *testing.T) {
t.Run("returns Some for non-nil error", func(t *testing.T) {
err := errors.New("test error")
result := AnyError(err)
assert.True(t, option.IsSome(result), "should return Some for non-nil error")
value := option.GetOrElse(func() error { return nil })(result)
assert.Equal(t, err, value)
})
t.Run("returns None for nil error", func(t *testing.T) {
var err error = nil
result := AnyError(err)
assert.True(t, option.IsNone(result), "should return None for nil error")
})
t.Run("works with different error types", func(t *testing.T) {
err1 := fmt.Errorf("wrapped: %w", errors.New("inner"))
err2 := &CircuitBreakerError{ResetAt: time.Now()}
result1 := AnyError(err1)
result2 := AnyError(err2)
assert.True(t, option.IsSome(result1))
assert.True(t, option.IsSome(result2))
})
}
// TestShouldOpenCircuit tests the shouldOpenCircuit function
func TestShouldOpenCircuit(t *testing.T) {
t.Run("returns false for nil error", func(t *testing.T) {
result := shouldOpenCircuit(nil)
assert.False(t, result)
})
t.Run("HTTP 5xx errors should open circuit", func(t *testing.T) {
testCases := []struct {
name string
statusCode int
expected bool
}{
{"500 Internal Server Error", 500, true},
{"501 Not Implemented", 501, true},
{"502 Bad Gateway", 502, true},
{"503 Service Unavailable", 503, true},
{"504 Gateway Timeout", 504, true},
{"599 Custom Server Error", 599, true},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
testURL, _ := url.Parse("http://example.com")
resp := &http.Response{
StatusCode: tc.statusCode,
Request: &http.Request{URL: testURL},
Body: http.NoBody,
}
httpErr := FH.StatusCodeError(resp)
result := shouldOpenCircuit(httpErr)
assert.Equal(t, tc.expected, result)
})
}
})
t.Run("HTTP 4xx errors should NOT open circuit", func(t *testing.T) {
testCases := []struct {
name string
statusCode int
expected bool
}{
{"400 Bad Request", 400, false},
{"401 Unauthorized", 401, false},
{"403 Forbidden", 403, false},
{"404 Not Found", 404, false},
{"422 Unprocessable Entity", 422, false},
{"429 Too Many Requests", 429, false},
{"499 Custom Client Error", 499, false},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
testURL, _ := url.Parse("http://example.com")
resp := &http.Response{
StatusCode: tc.statusCode,
Request: &http.Request{URL: testURL},
Body: http.NoBody,
}
httpErr := FH.StatusCodeError(resp)
result := shouldOpenCircuit(httpErr)
assert.Equal(t, tc.expected, result)
})
}
})
t.Run("HTTP 2xx and 3xx should NOT open circuit", func(t *testing.T) {
testCases := []int{200, 201, 204, 301, 302, 304}
for _, statusCode := range testCases {
t.Run(fmt.Sprintf("Status %d", statusCode), func(t *testing.T) {
testURL, _ := url.Parse("http://example.com")
resp := &http.Response{
StatusCode: statusCode,
Request: &http.Request{URL: testURL},
Body: http.NoBody,
}
httpErr := FH.StatusCodeError(resp)
result := shouldOpenCircuit(httpErr)
assert.False(t, result)
})
}
})
t.Run("network timeout errors should open circuit", func(t *testing.T) {
opErr := &net.OpError{
Op: "dial",
Err: &timeoutError{},
}
result := shouldOpenCircuit(opErr)
assert.True(t, result)
})
t.Run("DNS errors should open circuit", func(t *testing.T) {
dnsErr := &net.DNSError{
Err: "no such host",
Name: "example.com",
}
result := shouldOpenCircuit(dnsErr)
assert.True(t, result)
})
t.Run("URL timeout errors should open circuit", func(t *testing.T) {
urlErr := &url.Error{
Op: "Get",
URL: "http://example.com",
Err: &timeoutError{},
}
result := shouldOpenCircuit(urlErr)
assert.True(t, result)
})
t.Run("URL errors with nested network timeout should open circuit", func(t *testing.T) {
urlErr := &url.Error{
Op: "Get",
URL: "http://example.com",
Err: &net.OpError{
Op: "dial",
Err: &timeoutError{},
},
}
result := shouldOpenCircuit(urlErr)
assert.True(t, result)
})
t.Run("OpError with nil Err should open circuit", func(t *testing.T) {
opErr := &net.OpError{
Op: "dial",
Err: nil,
}
result := shouldOpenCircuit(opErr)
assert.True(t, result)
})
t.Run("wrapped HTTP 5xx error should open circuit", func(t *testing.T) {
testURL, _ := url.Parse("http://example.com")
resp := &http.Response{
StatusCode: 503,
Request: &http.Request{URL: testURL},
Body: http.NoBody,
}
httpErr := FH.StatusCodeError(resp)
wrappedErr := fmt.Errorf("service error: %w", httpErr)
result := shouldOpenCircuit(wrappedErr)
assert.True(t, result)
})
t.Run("wrapped HTTP 4xx error should NOT open circuit", func(t *testing.T) {
testURL, _ := url.Parse("http://example.com")
resp := &http.Response{
StatusCode: 404,
Request: &http.Request{URL: testURL},
Body: http.NoBody,
}
httpErr := FH.StatusCodeError(resp)
wrappedErr := fmt.Errorf("not found: %w", httpErr)
result := shouldOpenCircuit(wrappedErr)
assert.False(t, result)
})
t.Run("generic application error should NOT open circuit", func(t *testing.T) {
err := errors.New("validation failed")
result := shouldOpenCircuit(err)
assert.False(t, result)
})
}
// TestIsInfrastructureError tests infrastructure error detection through shouldOpenCircuit
func TestIsInfrastructureError(t *testing.T) {
t.Run("network timeout is infrastructure error", func(t *testing.T) {
opErr := &net.OpError{Op: "dial", Err: &timeoutError{}}
result := shouldOpenCircuit(opErr)
assert.True(t, result)
})
t.Run("OpError with nil Err is infrastructure error", func(t *testing.T) {
opErr := &net.OpError{Op: "dial", Err: nil}
result := shouldOpenCircuit(opErr)
assert.True(t, result)
})
t.Run("generic error returns false", func(t *testing.T) {
err := errors.New("generic error")
result := shouldOpenCircuit(err)
assert.False(t, result)
})
t.Run("wrapped network timeout is detected", func(t *testing.T) {
opErr := &net.OpError{Op: "dial", Err: &timeoutError{}}
wrappedErr := fmt.Errorf("connection failed: %w", opErr)
result := shouldOpenCircuit(wrappedErr)
assert.True(t, result)
})
}
// TestIsTLSError tests the isTLSError function
func TestIsTLSError(t *testing.T) {
t.Run("certificate invalid error is TLS error", func(t *testing.T) {
certErr := &x509.CertificateInvalidError{
Reason: x509.Expired,
}
result := isTLSError(certErr)
assert.True(t, result)
})
t.Run("unknown authority error is TLS error", func(t *testing.T) {
authErr := &x509.UnknownAuthorityError{}
result := isTLSError(authErr)
assert.True(t, result)
})
t.Run("generic error is not TLS error", func(t *testing.T) {
err := errors.New("generic error")
result := isTLSError(err)
assert.False(t, result)
})
t.Run("wrapped certificate error is detected", func(t *testing.T) {
certErr := &x509.CertificateInvalidError{
Reason: x509.Expired,
}
wrappedErr := fmt.Errorf("TLS handshake failed: %w", certErr)
result := isTLSError(wrappedErr)
assert.True(t, result)
})
t.Run("wrapped unknown authority error is detected", func(t *testing.T) {
authErr := &x509.UnknownAuthorityError{}
wrappedErr := fmt.Errorf("certificate verification failed: %w", authErr)
result := isTLSError(wrappedErr)
assert.True(t, result)
})
}
// TestInfrastructureError tests the InfrastructureError variable
func TestInfrastructureError(t *testing.T) {
t.Run("returns Some for infrastructure errors", func(t *testing.T) {
testURL, _ := url.Parse("http://example.com")
resp := &http.Response{
StatusCode: 503,
Request: &http.Request{URL: testURL},
Body: http.NoBody,
}
httpErr := FH.StatusCodeError(resp)
result := InfrastructureError(httpErr)
assert.True(t, option.IsSome(result))
})
t.Run("returns None for non-infrastructure errors", func(t *testing.T) {
testURL, _ := url.Parse("http://example.com")
resp := &http.Response{
StatusCode: 404,
Request: &http.Request{URL: testURL},
Body: http.NoBody,
}
httpErr := FH.StatusCodeError(resp)
result := InfrastructureError(httpErr)
assert.True(t, option.IsNone(result))
})
t.Run("returns None for nil error", func(t *testing.T) {
result := InfrastructureError(nil)
assert.True(t, option.IsNone(result))
})
t.Run("returns Some for network timeout", func(t *testing.T) {
opErr := &net.OpError{
Op: "dial",
Err: &timeoutError{},
}
result := InfrastructureError(opErr)
assert.True(t, option.IsSome(result))
})
}
// TestComplexErrorScenarios tests complex real-world error scenarios
func TestComplexErrorScenarios(t *testing.T) {
t.Run("deeply nested URL error with HTTP 5xx", func(t *testing.T) {
testURL, _ := url.Parse("http://api.example.com")
resp := &http.Response{
StatusCode: 502,
Request: &http.Request{URL: testURL},
Body: http.NoBody,
}
httpErr := FH.StatusCodeError(resp)
urlErr := &url.Error{
Op: "Get",
URL: "http://api.example.com",
Err: httpErr,
}
wrappedErr := fmt.Errorf("API call failed: %w", urlErr)
result := shouldOpenCircuit(wrappedErr)
assert.True(t, result, "should detect HTTP 5xx through multiple layers")
})
t.Run("URL error with timeout nested in OpError", func(t *testing.T) {
opErr := &net.OpError{
Op: "dial",
Err: &timeoutError{},
}
urlErr := &url.Error{
Op: "Post",
URL: "http://api.example.com",
Err: opErr,
}
result := shouldOpenCircuit(urlErr)
assert.True(t, result, "should detect timeout through URL error")
})
t.Run("multiple wrapped errors with infrastructure error at core", func(t *testing.T) {
coreErr := &net.OpError{Op: "dial", Err: &timeoutError{}}
layer1 := fmt.Errorf("connection attempt failed: %w", coreErr)
layer2 := fmt.Errorf("retry exhausted: %w", layer1)
layer3 := fmt.Errorf("service unavailable: %w", layer2)
result := shouldOpenCircuit(layer3)
assert.True(t, result, "should unwrap to find infrastructure error")
})
t.Run("OpError with nil Err should open circuit", func(t *testing.T) {
opErr := &net.OpError{
Op: "dial",
Err: nil,
}
result := shouldOpenCircuit(opErr)
assert.True(t, result, "OpError with nil Err should be treated as infrastructure error")
})
t.Run("mixed error types - HTTP 4xx with network error", func(t *testing.T) {
// This tests that we correctly identify the error type
testURL, _ := url.Parse("http://example.com")
resp := &http.Response{
StatusCode: 400,
Request: &http.Request{URL: testURL},
Body: http.NoBody,
}
httpErr := FH.StatusCodeError(resp)
result := shouldOpenCircuit(httpErr)
assert.False(t, result, "HTTP 4xx should not open circuit even if wrapped")
})
}
// Helper type for testing timeout errors
type timeoutError struct{}
func (e *timeoutError) Error() string { return "timeout" }
func (e *timeoutError) Timeout() bool { return true }
func (e *timeoutError) Temporary() bool { return true }


@@ -0,0 +1,208 @@
// Package circuitbreaker provides metrics collection for circuit breaker state transitions and events.
package circuitbreaker
import (
"log"
"time"
"github.com/IBM/fp-go/v2/function"
)
type (
// Metrics defines the interface for collecting circuit breaker metrics and events.
// Implementations can use this interface to track circuit breaker behavior for
// monitoring, alerting, and debugging purposes.
//
// All methods accept a time.Time parameter representing when the event occurred,
// and return an IO[Void] operation that performs the metric recording when executed.
//
// Thread Safety: Implementations must be thread-safe as circuit breakers may be
// accessed concurrently from multiple goroutines.
//
// Example Usage:
//
// logger := log.New(os.Stdout, "[CircuitBreaker] ", log.LstdFlags)
// metrics := MakeMetricsFromLogger("API-Service", logger)
//
// // In circuit breaker implementation
// io.Run(metrics.Accept(time.Now())) // Record accepted request
// io.Run(metrics.Reject(time.Now())) // Record rejected request
// io.Run(metrics.Open(time.Now())) // Record circuit opening
// io.Run(metrics.Close(time.Now())) // Record circuit closing
// io.Run(metrics.Canary(time.Now())) // Record canary request
Metrics interface {
// Accept records that a request was accepted and allowed through the circuit breaker.
// This is called when the circuit is closed or in half-open state (canary request).
//
// Parameters:
// - time.Time: The timestamp when the request was accepted
//
// Returns:
// - IO[Void]: An IO operation that records the acceptance when executed
//
// Thread Safety: Must be safe to call concurrently.
Accept(time.Time) IO[Void]
// Reject records that a request was rejected because the circuit breaker is open.
// This is called when a request is blocked due to the circuit being in open state
// and the reset time has not been reached.
//
// Parameters:
// - time.Time: The timestamp when the request was rejected
//
// Returns:
// - IO[Void]: An IO operation that records the rejection when executed
//
// Thread Safety: Must be safe to call concurrently.
Reject(time.Time) IO[Void]
// Open records that the circuit breaker transitioned to the open state.
// This is called when the failure threshold is exceeded and the circuit opens
// to prevent further requests from reaching the failing service.
//
// Parameters:
// - time.Time: The timestamp when the circuit opened
//
// Returns:
// - IO[Void]: An IO operation that records the state transition when executed
//
// Thread Safety: Must be safe to call concurrently.
Open(time.Time) IO[Void]
// Close records that the circuit breaker transitioned to the closed state.
// This is called when:
// - A canary request succeeds in half-open state
// - The circuit is manually reset
// - The circuit breaker is initialized
//
// Parameters:
// - time.Time: The timestamp when the circuit closed
//
// Returns:
// - IO[Void]: An IO operation that records the state transition when executed
//
// Thread Safety: Must be safe to call concurrently.
Close(time.Time) IO[Void]
// Canary records that a canary (test) request is being attempted.
// This is called when the circuit is in half-open state and a single test request
// is allowed through to check if the service has recovered.
//
// Parameters:
// - time.Time: The timestamp when the canary request was initiated
//
// Returns:
// - IO[Void]: An IO operation that records the canary attempt when executed
//
// Thread Safety: Must be safe to call concurrently.
Canary(time.Time) IO[Void]
}
// loggingMetrics is a simple implementation of the Metrics interface that logs
// circuit breaker events using Go's standard log.Logger.
//
// This implementation is thread-safe as log.Logger is safe for concurrent use.
//
// Fields:
// - name: A human-readable name identifying the circuit breaker instance
// - logger: The log.Logger instance used for writing log messages
loggingMetrics struct {
name string
logger *log.Logger
}
)
// doLog is a helper method that creates an IO operation for logging a circuit breaker event.
// It formats the log message with the event prefix, circuit breaker name, and timestamp.
//
// Parameters:
// - prefix: The event type (e.g., "Accept", "Reject", "Open", "Close", "Canary")
// - ct: The timestamp when the event occurred
//
// Returns:
// - IO[Void]: An IO operation that logs the event when executed
//
// Thread Safety: Safe for concurrent use as log.Logger is thread-safe.
//
// Log Format: "<prefix>: <name>, <timestamp>"
// Example: "Open: API-Service, 2026-01-09 15:30:45.123 +0100 CET"
func (m *loggingMetrics) doLog(prefix string, ct time.Time) IO[Void] {
return func() Void {
m.logger.Printf("%s: %s, %s\n", prefix, m.name, ct)
return function.VOID
}
}
// Accept implements the Metrics interface for loggingMetrics.
// Logs when a request is accepted through the circuit breaker.
//
// Thread Safety: Safe for concurrent use.
func (m *loggingMetrics) Accept(ct time.Time) IO[Void] {
return m.doLog("Accept", ct)
}
// Open implements the Metrics interface for loggingMetrics.
// Logs when the circuit breaker transitions to open state.
//
// Thread Safety: Safe for concurrent use.
func (m *loggingMetrics) Open(ct time.Time) IO[Void] {
return m.doLog("Open", ct)
}
// Close implements the Metrics interface for loggingMetrics.
// Logs when the circuit breaker transitions to closed state.
//
// Thread Safety: Safe for concurrent use.
func (m *loggingMetrics) Close(ct time.Time) IO[Void] {
return m.doLog("Close", ct)
}
// Reject implements the Metrics interface for loggingMetrics.
// Logs when a request is rejected because the circuit breaker is open.
//
// Thread Safety: Safe for concurrent use.
func (m *loggingMetrics) Reject(ct time.Time) IO[Void] {
return m.doLog("Reject", ct)
}
// Canary implements the Metrics interface for loggingMetrics.
// Logs when a canary (test) request is attempted in half-open state.
//
// Thread Safety: Safe for concurrent use.
func (m *loggingMetrics) Canary(ct time.Time) IO[Void] {
return m.doLog("Canary", ct)
}
// MakeMetricsFromLogger creates a Metrics implementation that logs circuit breaker events
// using the provided log.Logger.
//
// This is a simple metrics implementation suitable for development, debugging, and
// basic production monitoring. For more sophisticated metrics collection (e.g., Prometheus,
// StatsD), implement the Metrics interface with a custom type.
//
// Parameters:
// - name: A human-readable name identifying the circuit breaker instance.
// This name appears in all log messages to distinguish between multiple circuit breakers.
// - logger: The log.Logger instance to use for writing log messages.
// If nil, this will panic when metrics are recorded.
//
// Returns:
// - Metrics: A thread-safe Metrics implementation that logs events
//
// Thread Safety: The returned Metrics implementation is safe for concurrent use
// as log.Logger is thread-safe.
//
// Example:
//
// logger := log.New(os.Stdout, "[CB] ", log.LstdFlags)
// metrics := MakeMetricsFromLogger("UserService", logger)
//
// // Use with circuit breaker
// io.Run(metrics.Open(time.Now()))
// // Output: [CB] 2026/01/09 15:30:45 Open: UserService, 2026-01-09 15:30:45.123 +0100 CET
//
// io.Run(metrics.Reject(time.Now()))
// // Output: [CB] 2026/01/09 15:30:46 Reject: UserService, 2026-01-09 15:30:46.456 +0100 CET
func MakeMetricsFromLogger(name string, logger *log.Logger) Metrics {
return &loggingMetrics{name: name, logger: logger}
}
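For comparison, here is a minimal sketch of an alternative Metrics implementation that counts events instead of logging them. It is not part of the package: countingMetrics is a hypothetical name, and the sketch assumes the Metrics interface above, the package's IO and Void aliases, and "sync/atomic" being imported. Atomic counters keep the same thread-safety contract as the logging implementation.

// countingMetrics is an illustrative Metrics implementation (hypothetical, not part
// of the package) that records events as atomic counters instead of log lines.
type countingMetrics struct {
	accepts, rejects, opens, closes, canaries atomic.Int64
}

// count defers the increment until the returned IO is executed, mirroring doLog.
func (m *countingMetrics) count(c *atomic.Int64) IO[Void] {
	return func() Void {
		c.Add(1)
		return function.VOID
	}
}

func (m *countingMetrics) Accept(time.Time) IO[Void] { return m.count(&m.accepts) }
func (m *countingMetrics) Reject(time.Time) IO[Void] { return m.count(&m.rejects) }
func (m *countingMetrics) Open(time.Time) IO[Void]   { return m.count(&m.opens) }
func (m *countingMetrics) Close(time.Time) IO[Void]  { return m.count(&m.closes) }
func (m *countingMetrics) Canary(time.Time) IO[Void] { return m.count(&m.canaries) }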


@@ -0,0 +1,506 @@
package circuitbreaker
import (
"bytes"
"log"
"strings"
"sync"
"testing"
"time"
"github.com/IBM/fp-go/v2/io"
"github.com/stretchr/testify/assert"
)
// TestMakeMetricsFromLogger tests the MakeMetricsFromLogger constructor
func TestMakeMetricsFromLogger(t *testing.T) {
t.Run("creates valid Metrics implementation", func(t *testing.T) {
var buf bytes.Buffer
logger := log.New(&buf, "", 0)
metrics := MakeMetricsFromLogger("TestCircuit", logger)
assert.NotNil(t, metrics, "MakeMetricsFromLogger should return non-nil Metrics")
})
t.Run("returns loggingMetrics type", func(t *testing.T) {
var buf bytes.Buffer
logger := log.New(&buf, "", 0)
metrics := MakeMetricsFromLogger("TestCircuit", logger)
_, ok := metrics.(*loggingMetrics)
assert.True(t, ok, "should return *loggingMetrics type")
})
t.Run("stores name correctly", func(t *testing.T) {
var buf bytes.Buffer
logger := log.New(&buf, "", 0)
name := "MyCircuitBreaker"
metrics := MakeMetricsFromLogger(name, logger).(*loggingMetrics)
assert.Equal(t, name, metrics.name, "name should be stored correctly")
})
t.Run("stores logger correctly", func(t *testing.T) {
var buf bytes.Buffer
logger := log.New(&buf, "", 0)
metrics := MakeMetricsFromLogger("TestCircuit", logger).(*loggingMetrics)
assert.Equal(t, logger, metrics.logger, "logger should be stored correctly")
})
}
// TestLoggingMetricsAccept tests the Accept method
func TestLoggingMetricsAccept(t *testing.T) {
t.Run("logs accept event with correct format", func(t *testing.T) {
var buf bytes.Buffer
logger := log.New(&buf, "", 0)
metrics := MakeMetricsFromLogger("TestCircuit", logger)
timestamp := time.Date(2026, 1, 9, 15, 30, 45, 0, time.UTC)
io.Run(metrics.Accept(timestamp))
output := buf.String()
assert.Contains(t, output, "Accept:", "should contain Accept prefix")
assert.Contains(t, output, "TestCircuit", "should contain circuit name")
assert.Contains(t, output, timestamp.String(), "should contain timestamp")
})
t.Run("returns IO[Void] that can be executed", func(t *testing.T) {
var buf bytes.Buffer
logger := log.New(&buf, "", 0)
metrics := MakeMetricsFromLogger("TestCircuit", logger)
timestamp := time.Now()
ioOp := metrics.Accept(timestamp)
assert.NotNil(t, ioOp, "should return non-nil IO operation")
result := io.Run(ioOp)
assert.NotNil(t, result, "IO operation should execute successfully")
})
t.Run("logs multiple accept events", func(t *testing.T) {
var buf bytes.Buffer
logger := log.New(&buf, "", 0)
metrics := MakeMetricsFromLogger("TestCircuit", logger)
time1 := time.Date(2026, 1, 9, 15, 30, 0, 0, time.UTC)
time2 := time.Date(2026, 1, 9, 15, 31, 0, 0, time.UTC)
io.Run(metrics.Accept(time1))
io.Run(metrics.Accept(time2))
output := buf.String()
lines := strings.Split(strings.TrimSpace(output), "\n")
assert.Len(t, lines, 2, "should have 2 log lines")
assert.Contains(t, lines[0], time1.String())
assert.Contains(t, lines[1], time2.String())
})
}
// TestLoggingMetricsReject tests the Reject method
func TestLoggingMetricsReject(t *testing.T) {
t.Run("logs reject event with correct format", func(t *testing.T) {
var buf bytes.Buffer
logger := log.New(&buf, "", 0)
metrics := MakeMetricsFromLogger("TestCircuit", logger)
timestamp := time.Date(2026, 1, 9, 15, 30, 45, 0, time.UTC)
io.Run(metrics.Reject(timestamp))
output := buf.String()
assert.Contains(t, output, "Reject:", "should contain Reject prefix")
assert.Contains(t, output, "TestCircuit", "should contain circuit name")
assert.Contains(t, output, timestamp.String(), "should contain timestamp")
})
t.Run("returns IO[Void] that can be executed", func(t *testing.T) {
var buf bytes.Buffer
logger := log.New(&buf, "", 0)
metrics := MakeMetricsFromLogger("TestCircuit", logger)
timestamp := time.Now()
ioOp := metrics.Reject(timestamp)
assert.NotNil(t, ioOp, "should return non-nil IO operation")
result := io.Run(ioOp)
assert.NotNil(t, result, "IO operation should execute successfully")
})
}
// TestLoggingMetricsOpen tests the Open method
func TestLoggingMetricsOpen(t *testing.T) {
t.Run("logs open event with correct format", func(t *testing.T) {
var buf bytes.Buffer
logger := log.New(&buf, "", 0)
metrics := MakeMetricsFromLogger("TestCircuit", logger)
timestamp := time.Date(2026, 1, 9, 15, 30, 45, 0, time.UTC)
io.Run(metrics.Open(timestamp))
output := buf.String()
assert.Contains(t, output, "Open:", "should contain Open prefix")
assert.Contains(t, output, "TestCircuit", "should contain circuit name")
assert.Contains(t, output, timestamp.String(), "should contain timestamp")
})
t.Run("returns IO[Void] that can be executed", func(t *testing.T) {
var buf bytes.Buffer
logger := log.New(&buf, "", 0)
metrics := MakeMetricsFromLogger("TestCircuit", logger)
timestamp := time.Now()
ioOp := metrics.Open(timestamp)
assert.NotNil(t, ioOp, "should return non-nil IO operation")
result := io.Run(ioOp)
assert.NotNil(t, result, "IO operation should execute successfully")
})
}
// TestLoggingMetricsClose tests the Close method
func TestLoggingMetricsClose(t *testing.T) {
t.Run("logs close event with correct format", func(t *testing.T) {
var buf bytes.Buffer
logger := log.New(&buf, "", 0)
metrics := MakeMetricsFromLogger("TestCircuit", logger)
timestamp := time.Date(2026, 1, 9, 15, 30, 45, 0, time.UTC)
io.Run(metrics.Close(timestamp))
output := buf.String()
assert.Contains(t, output, "Close:", "should contain Close prefix")
assert.Contains(t, output, "TestCircuit", "should contain circuit name")
assert.Contains(t, output, timestamp.String(), "should contain timestamp")
})
t.Run("returns IO[Void] that can be executed", func(t *testing.T) {
var buf bytes.Buffer
logger := log.New(&buf, "", 0)
metrics := MakeMetricsFromLogger("TestCircuit", logger)
timestamp := time.Now()
ioOp := metrics.Close(timestamp)
assert.NotNil(t, ioOp, "should return non-nil IO operation")
result := io.Run(ioOp)
assert.NotNil(t, result, "IO operation should execute successfully")
})
}
// TestLoggingMetricsCanary tests the Canary method
func TestLoggingMetricsCanary(t *testing.T) {
t.Run("logs canary event with correct format", func(t *testing.T) {
var buf bytes.Buffer
logger := log.New(&buf, "", 0)
metrics := MakeMetricsFromLogger("TestCircuit", logger)
timestamp := time.Date(2026, 1, 9, 15, 30, 45, 0, time.UTC)
io.Run(metrics.Canary(timestamp))
output := buf.String()
assert.Contains(t, output, "Canary:", "should contain Canary prefix")
assert.Contains(t, output, "TestCircuit", "should contain circuit name")
assert.Contains(t, output, timestamp.String(), "should contain timestamp")
})
t.Run("returns IO[Void] that can be executed", func(t *testing.T) {
var buf bytes.Buffer
logger := log.New(&buf, "", 0)
metrics := MakeMetricsFromLogger("TestCircuit", logger)
timestamp := time.Now()
ioOp := metrics.Canary(timestamp)
assert.NotNil(t, ioOp, "should return non-nil IO operation")
result := io.Run(ioOp)
assert.NotNil(t, result, "IO operation should execute successfully")
})
}
// TestLoggingMetricsDoLog tests the doLog helper method
func TestLoggingMetricsDoLog(t *testing.T) {
t.Run("formats log message correctly", func(t *testing.T) {
var buf bytes.Buffer
logger := log.New(&buf, "", 0)
metrics := &loggingMetrics{name: "TestCircuit", logger: logger}
timestamp := time.Date(2026, 1, 9, 15, 30, 45, 0, time.UTC)
io.Run(metrics.doLog("CustomEvent", timestamp))
output := buf.String()
assert.Contains(t, output, "CustomEvent:", "should contain custom prefix")
assert.Contains(t, output, "TestCircuit", "should contain circuit name")
assert.Contains(t, output, timestamp.String(), "should contain timestamp")
})
t.Run("handles different prefixes", func(t *testing.T) {
var buf bytes.Buffer
logger := log.New(&buf, "", 0)
metrics := &loggingMetrics{name: "TestCircuit", logger: logger}
timestamp := time.Now()
prefixes := []string{"Accept", "Reject", "Open", "Close", "Canary", "Custom"}
for _, prefix := range prefixes {
buf.Reset()
io.Run(metrics.doLog(prefix, timestamp))
output := buf.String()
assert.Contains(t, output, prefix+":", "should contain prefix: "+prefix)
}
})
}
// TestMetricsIntegration tests integration scenarios
func TestMetricsIntegration(t *testing.T) {
t.Run("logs complete circuit breaker lifecycle", func(t *testing.T) {
var buf bytes.Buffer
logger := log.New(&buf, "", 0)
metrics := MakeMetricsFromLogger("APICircuit", logger)
baseTime := time.Date(2026, 1, 9, 15, 30, 0, 0, time.UTC)
// Simulate circuit breaker lifecycle
io.Run(metrics.Accept(baseTime)) // Request accepted
io.Run(metrics.Accept(baseTime.Add(1 * time.Second))) // Another request
io.Run(metrics.Open(baseTime.Add(2 * time.Second))) // Circuit opens
io.Run(metrics.Reject(baseTime.Add(3 * time.Second))) // Request rejected
io.Run(metrics.Canary(baseTime.Add(30 * time.Second))) // Canary attempt
io.Run(metrics.Close(baseTime.Add(31 * time.Second))) // Circuit closes
output := buf.String()
lines := strings.Split(strings.TrimSpace(output), "\n")
assert.Len(t, lines, 6, "should have 6 log lines")
assert.Contains(t, lines[0], "Accept:")
assert.Contains(t, lines[1], "Accept:")
assert.Contains(t, lines[2], "Open:")
assert.Contains(t, lines[3], "Reject:")
assert.Contains(t, lines[4], "Canary:")
assert.Contains(t, lines[5], "Close:")
})
t.Run("distinguishes between multiple circuit breakers", func(t *testing.T) {
var buf bytes.Buffer
logger := log.New(&buf, "", 0)
metrics1 := MakeMetricsFromLogger("Circuit1", logger)
metrics2 := MakeMetricsFromLogger("Circuit2", logger)
timestamp := time.Now()
io.Run(metrics1.Accept(timestamp))
io.Run(metrics2.Accept(timestamp))
output := buf.String()
assert.Contains(t, output, "Circuit1", "should contain first circuit name")
assert.Contains(t, output, "Circuit2", "should contain second circuit name")
})
}
// TestMetricsThreadSafety tests concurrent access to metrics
func TestMetricsThreadSafety(t *testing.T) {
t.Run("handles concurrent metric recording", func(t *testing.T) {
var buf bytes.Buffer
logger := log.New(&buf, "", 0)
metrics := MakeMetricsFromLogger("ConcurrentCircuit", logger)
var wg sync.WaitGroup
numGoroutines := 100
wg.Add(numGoroutines)
// Launch multiple goroutines recording metrics concurrently
for i := 0; i < numGoroutines; i++ {
go func(id int) {
defer wg.Done()
timestamp := time.Now()
io.Run(metrics.Accept(timestamp))
}(i)
}
wg.Wait()
output := buf.String()
lines := strings.Split(strings.TrimSpace(output), "\n")
assert.Len(t, lines, numGoroutines, "should have logged all events")
})
t.Run("handles concurrent different event types", func(t *testing.T) {
var buf bytes.Buffer
logger := log.New(&buf, "", 0)
metrics := MakeMetricsFromLogger("ConcurrentCircuit", logger)
var wg sync.WaitGroup
numIterations := 20
wg.Add(numIterations * 5) // 5 event types
timestamp := time.Now()
for i := 0; i < numIterations; i++ {
go func() {
defer wg.Done()
io.Run(metrics.Accept(timestamp))
}()
go func() {
defer wg.Done()
io.Run(metrics.Reject(timestamp))
}()
go func() {
defer wg.Done()
io.Run(metrics.Open(timestamp))
}()
go func() {
defer wg.Done()
io.Run(metrics.Close(timestamp))
}()
go func() {
defer wg.Done()
io.Run(metrics.Canary(timestamp))
}()
}
wg.Wait()
output := buf.String()
lines := strings.Split(strings.TrimSpace(output), "\n")
assert.Len(t, lines, numIterations*5, "should have logged all events")
})
}
// TestMetricsEdgeCases tests edge cases and special scenarios
func TestMetricsEdgeCases(t *testing.T) {
t.Run("handles empty circuit breaker name", func(t *testing.T) {
var buf bytes.Buffer
logger := log.New(&buf, "", 0)
metrics := MakeMetricsFromLogger("", logger)
timestamp := time.Now()
io.Run(metrics.Accept(timestamp))
output := buf.String()
assert.NotEmpty(t, output, "should still log even with empty name")
})
t.Run("handles very long circuit breaker name", func(t *testing.T) {
var buf bytes.Buffer
logger := log.New(&buf, "", 0)
longName := strings.Repeat("VeryLongCircuitBreakerName", 100)
metrics := MakeMetricsFromLogger(longName, logger)
timestamp := time.Now()
io.Run(metrics.Accept(timestamp))
output := buf.String()
assert.Contains(t, output, longName, "should handle long names")
})
t.Run("handles special characters in name", func(t *testing.T) {
var buf bytes.Buffer
logger := log.New(&buf, "", 0)
specialName := "Circuit-Breaker_123!@#$%^&*()"
metrics := MakeMetricsFromLogger(specialName, logger)
timestamp := time.Now()
io.Run(metrics.Accept(timestamp))
output := buf.String()
assert.Contains(t, output, specialName, "should handle special characters")
})
t.Run("handles zero time", func(t *testing.T) {
var buf bytes.Buffer
logger := log.New(&buf, "", 0)
metrics := MakeMetricsFromLogger("TestCircuit", logger)
zeroTime := time.Time{}
io.Run(metrics.Accept(zeroTime))
output := buf.String()
assert.NotEmpty(t, output, "should handle zero time")
assert.Contains(t, output, "Accept:")
})
t.Run("handles far future time", func(t *testing.T) {
var buf bytes.Buffer
logger := log.New(&buf, "", 0)
metrics := MakeMetricsFromLogger("TestCircuit", logger)
futureTime := time.Date(9999, 12, 31, 23, 59, 59, 0, time.UTC)
io.Run(metrics.Accept(futureTime))
output := buf.String()
assert.NotEmpty(t, output, "should handle far future time")
assert.Contains(t, output, "9999")
})
}
// TestMetricsWithCustomLogger tests metrics with different logger configurations
func TestMetricsWithCustomLogger(t *testing.T) {
t.Run("works with logger with custom prefix", func(t *testing.T) {
var buf bytes.Buffer
logger := log.New(&buf, "[CB] ", 0)
metrics := MakeMetricsFromLogger("TestCircuit", logger)
timestamp := time.Now()
io.Run(metrics.Accept(timestamp))
output := buf.String()
assert.Contains(t, output, "[CB]", "should include custom prefix")
assert.Contains(t, output, "Accept:")
})
t.Run("works with logger with flags", func(t *testing.T) {
var buf bytes.Buffer
logger := log.New(&buf, "", log.Ldate|log.Ltime)
metrics := MakeMetricsFromLogger("TestCircuit", logger)
timestamp := time.Now()
io.Run(metrics.Accept(timestamp))
output := buf.String()
assert.NotEmpty(t, output, "should log with flags")
assert.Contains(t, output, "Accept:")
})
}
// TestMetricsIOOperations tests IO operation behavior
func TestMetricsIOOperations(t *testing.T) {
t.Run("IO operations are lazy", func(t *testing.T) {
var buf bytes.Buffer
logger := log.New(&buf, "", 0)
metrics := MakeMetricsFromLogger("TestCircuit", logger)
timestamp := time.Now()
// Create IO operation but don't execute it
_ = metrics.Accept(timestamp)
// Buffer should be empty because IO wasn't executed
assert.Empty(t, buf.String(), "IO operation should be lazy")
})
t.Run("IO operations execute when run", func(t *testing.T) {
var buf bytes.Buffer
logger := log.New(&buf, "", 0)
metrics := MakeMetricsFromLogger("TestCircuit", logger)
timestamp := time.Now()
ioOp := metrics.Accept(timestamp)
io.Run(ioOp)
assert.NotEmpty(t, buf.String(), "IO operation should execute when run")
})
t.Run("same IO operation can be executed multiple times", func(t *testing.T) {
var buf bytes.Buffer
logger := log.New(&buf, "", 0)
metrics := MakeMetricsFromLogger("TestCircuit", logger)
timestamp := time.Now()
ioOp := metrics.Accept(timestamp)
io.Run(ioOp)
io.Run(ioOp)
io.Run(ioOp)
output := buf.String()
lines := strings.Split(strings.TrimSpace(output), "\n")
assert.Len(t, lines, 3, "should execute multiple times")
})
}

v2/circuitbreaker/types.go

@@ -0,0 +1,118 @@
// Package circuitbreaker provides a functional implementation of the circuit breaker pattern.
// A circuit breaker prevents cascading failures by temporarily blocking requests to a failing service,
// allowing it time to recover before retrying.
//
// # Thread Safety
//
// All data structures in this package are immutable except for IORef[BreakerState].
// The IORef provides thread-safe mutable state through atomic operations.
//
// Immutable types (safe for concurrent use):
// - BreakerState (Either[openState, ClosedState])
// - openState
// - ClosedState implementations (closedStateWithErrorCount, closedStateWithHistory)
// - All function types and readers
//
// Mutable types (thread-safe through atomic operations):
// - IORef[BreakerState] - provides atomic read/write/modify operations
//
// ClosedState implementations must be thread-safe. The recommended approach is to
// return new copies for all operations (Empty, AddError, AddSuccess, Check), which
// provides automatic thread safety through immutability.
package circuitbreaker
import (
"time"
"github.com/IBM/fp-go/v2/either"
"github.com/IBM/fp-go/v2/endomorphism"
"github.com/IBM/fp-go/v2/function"
"github.com/IBM/fp-go/v2/io"
"github.com/IBM/fp-go/v2/ioref"
"github.com/IBM/fp-go/v2/option"
"github.com/IBM/fp-go/v2/ord"
"github.com/IBM/fp-go/v2/pair"
"github.com/IBM/fp-go/v2/predicate"
"github.com/IBM/fp-go/v2/reader"
"github.com/IBM/fp-go/v2/retry"
"github.com/IBM/fp-go/v2/state"
)
type (
// Ord is a type alias for ord.Ord, representing a total ordering on type A.
// Used for comparing values in a consistent way.
Ord[A any] = ord.Ord[A]
// Option is a type alias for option.Option, representing an optional value.
// It can be either Some(value) or None, used for safe handling of nullable values.
Option[A any] = option.Option[A]
// Endomorphism is a type alias for endomorphism.Endomorphism, representing a function from A to A.
// Used for transformations that preserve the type.
Endomorphism[A any] = endomorphism.Endomorphism[A]
// IO is a type alias for io.IO, representing a lazy computation that produces a value of type T.
// Used for side-effectful operations that are deferred until execution.
IO[T any] = io.IO[T]
// Pair is a type alias for pair.Pair, representing a tuple of two values.
// Used for grouping related values together.
Pair[L, R any] = pair.Pair[L, R]
// IORef is a type alias for ioref.IORef, representing a mutable reference to a value of type T.
// Used for managing mutable state in a functional way with IO operations.
IORef[T any] = ioref.IORef[T]
// State is a type alias for state.State, representing a stateful computation.
// It transforms a state of type T and produces a result of type R.
State[T, R any] = state.State[T, R]
// Either is a type alias for either.Either, representing a value that can be one of two types.
// Left[E] represents an error or alternative path, Right[A] represents the success path.
Either[E, A any] = either.Either[E, A]
// Predicate is a type alias for predicate.Predicate, representing a function that tests a value.
// Returns true if the value satisfies the predicate condition, false otherwise.
Predicate[A any] = predicate.Predicate[A]
// Reader is a type alias for reader.Reader, representing a computation that depends on an environment R
// and produces a value of type A. Used for dependency injection and configuration.
Reader[R, A any] = reader.Reader[R, A]
// openState represents the internal state when the circuit breaker is open.
// In the open state, requests are blocked to give the failing service time to recover.
// The circuit breaker will transition to a half-open state (canary request) after resetAt.
openState struct {
openedAt time.Time
// resetAt is the time when the circuit breaker should attempt a canary request
// to test if the service has recovered. Calculated based on the retry policy.
resetAt time.Time
// retryStatus tracks the current retry attempt information, including the number
// of retries and the delay between attempts. Used by the retry policy to calculate
// exponential backoff or other retry strategies.
retryStatus retry.RetryStatus
// canaryRequest indicates whether the circuit is in half-open state, allowing
// a single test request (canary) to check if the service has recovered.
// If true, one request is allowed through to test the service.
// If the canary succeeds, the circuit closes; if it fails, the circuit remains open
// with an extended reset time.
canaryRequest bool
}
// BreakerState represents the current state of the circuit breaker.
// It is an Either type where:
// - Left[openState] represents an open circuit (requests are blocked)
// - Right[ClosedState] represents a closed circuit (requests are allowed through)
//
// State Transitions:
// - Closed -> Open: When failure threshold is exceeded in ClosedState
// - Open -> Half-Open: When resetAt is reached (canaryRequest = true)
// - Half-Open -> Closed: When canary request succeeds
// - Half-Open -> Open: When canary request fails (with extended resetAt)
BreakerState = Either[openState, ClosedState]
Void = function.Void
)


@@ -27,13 +27,15 @@ import (
"strings"
"text/template"
S "github.com/IBM/fp-go/v2/string"
C "github.com/urfave/cli/v2"
)
const (
keyLensDir = "dir"
keyVerbose = "verbose"
lensAnnotation = "fp-go:Lens"
keyLensDir = "dir"
keyVerbose = "verbose"
keyIncludeTestFile = "include-test-files"
lensAnnotation = "fp-go:Lens"
)
var (
@@ -49,21 +51,32 @@ var (
Value: false,
Usage: "Enable verbose output",
}
flagIncludeTestFiles = &C.BoolFlag{
Name: keyIncludeTestFile,
Aliases: []string{"t"},
Value: false,
Usage: "Include test files (*_test.go) when scanning for annotated types",
}
)
// structInfo holds information about a struct that needs lens generation
type structInfo struct {
Name string
Fields []fieldInfo
Imports map[string]string // package path -> alias
Name string
TypeParams string // e.g., "[T any]" or "[K comparable, V any]" - for type declarations
TypeParamNames string // e.g., "[T]" or "[K, V]" - for type usage in function signatures
Fields []fieldInfo
Imports map[string]string // package path -> alias
}
// fieldInfo holds information about a struct field
type fieldInfo struct {
Name string
TypeName string
BaseType string // TypeName without leading * for pointer types
IsOptional bool // true if field is a pointer or has json omitempty tag
Name string
TypeName string
BaseType string // TypeName without leading * for pointer types
IsOptional bool // true if field is a pointer or has json omitempty tag
IsComparable bool // true if the type is comparable (can use ==)
IsEmbedded bool // true if this field comes from an embedded struct
}
// templateData holds data for template rendering
@@ -74,65 +87,151 @@ type templateData struct {
const lensStructTemplate = `
// {{.Name}}Lenses provides lenses for accessing fields of {{.Name}}
type {{.Name}}Lenses struct {
type {{.Name}}Lenses{{.TypeParams}} struct {
// mandatory fields
{{- range .Fields}}
{{.Name}} {{if .IsOptional}}LO.LensO[{{$.Name}}, {{.TypeName}}]{{else}}L.Lens[{{$.Name}}, {{.TypeName}}]{{end}}
{{.Name}} __lens.Lens[{{$.Name}}{{$.TypeParamNames}}, {{.TypeName}}]
{{- end}}
// optional fields
{{- range .Fields}}
{{- if .IsComparable}}
{{.Name}}O __lens_option.LensO[{{$.Name}}{{$.TypeParamNames}}, {{.TypeName}}]
{{- end}}
{{- end}}
}
// {{.Name}}RefLenses provides lenses for accessing fields of {{.Name}} via a reference to {{.Name}}
type {{.Name}}RefLenses struct {
type {{.Name}}RefLenses{{.TypeParams}} struct {
// mandatory fields
{{- range .Fields}}
{{.Name}} {{if .IsOptional}}LO.LensO[*{{$.Name}}, {{.TypeName}}]{{else}}L.Lens[*{{$.Name}}, {{.TypeName}}]{{end}}
{{.Name}} __lens.Lens[*{{$.Name}}{{$.TypeParamNames}}, {{.TypeName}}]
{{- end}}
// optional fields
{{- range .Fields}}
{{- if .IsComparable}}
{{.Name}}O __lens_option.LensO[*{{$.Name}}{{$.TypeParamNames}}, {{.TypeName}}]
{{- end}}
{{- end}}
// prisms
{{- range .Fields}}
{{.Name}}P __prism.Prism[*{{$.Name}}{{$.TypeParamNames}}, {{.TypeName}}]
{{- end}}
}
// {{.Name}}Prisms provides prisms for accessing fields of {{.Name}}
type {{.Name}}Prisms{{.TypeParams}} struct {
{{- range .Fields}}
{{.Name}} __prism.Prism[{{$.Name}}{{$.TypeParamNames}}, {{.TypeName}}]
{{- end}}
}
`
const lensConstructorTemplate = `
// Make{{.Name}}Lenses creates a new {{.Name}}Lenses with lenses for all fields
func Make{{.Name}}Lenses() {{.Name}}Lenses {
func Make{{.Name}}Lenses{{.TypeParams}}() {{.Name}}Lenses{{.TypeParamNames}} {
// mandatory lenses
{{- range .Fields}}
{{- if .IsOptional}}
iso{{.Name}} := I.FromZero[{{.TypeName}}]()
lens{{.Name}} := __lens.MakeLensWithName(
func(s {{$.Name}}{{$.TypeParamNames}}) {{.TypeName}} { return s.{{.Name}} },
func(s {{$.Name}}{{$.TypeParamNames}}, v {{.TypeName}}) {{$.Name}}{{$.TypeParamNames}} { s.{{.Name}} = v; return s },
"{{$.Name}}{{$.TypeParamNames}}.{{.Name}}",
)
{{- end}}
// optional lenses
{{- range .Fields}}
{{- if .IsComparable}}
lens{{.Name}}O := __lens_option.FromIso[{{$.Name}}{{$.TypeParamNames}}](__iso_option.FromZero[{{.TypeName}}]())(lens{{.Name}})
{{- end}}
{{- end}}
return {{.Name}}Lenses{
return {{.Name}}Lenses{{.TypeParamNames}}{
// mandatory lenses
{{- range .Fields}}
{{- if .IsOptional}}
{{.Name}}: L.MakeLens(
func(s {{$.Name}}) O.Option[{{.TypeName}}] { return iso{{.Name}}.Get(s.{{.Name}}) },
func(s {{$.Name}}, v O.Option[{{.TypeName}}]) {{$.Name}} { s.{{.Name}} = iso{{.Name}}.ReverseGet(v); return s },
),
{{- else}}
{{.Name}}: L.MakeLens(
func(s {{$.Name}}) {{.TypeName}} { return s.{{.Name}} },
func(s {{$.Name}}, v {{.TypeName}}) {{$.Name}} { s.{{.Name}} = v; return s },
),
{{.Name}}: lens{{.Name}},
{{- end}}
// optional lenses
{{- range .Fields}}
{{- if .IsComparable}}
{{.Name}}O: lens{{.Name}}O,
{{- end}}
{{- end}}
}
}
// Make{{.Name}}RefLenses creates a new {{.Name}}RefLenses with lenses for all fields
func Make{{.Name}}RefLenses() {{.Name}}RefLenses {
func Make{{.Name}}RefLenses{{.TypeParams}}() {{.Name}}RefLenses{{.TypeParamNames}} {
// mandatory lenses
{{- range .Fields}}
{{- if .IsOptional}}
iso{{.Name}} := I.FromZero[{{.TypeName}}]()
{{- end}}
{{- end}}
return {{.Name}}RefLenses{
{{- range .Fields}}
{{- if .IsOptional}}
{{.Name}}: L.MakeLensRef(
func(s *{{$.Name}}) O.Option[{{.TypeName}}] { return iso{{.Name}}.Get(s.{{.Name}}) },
func(s *{{$.Name}}, v O.Option[{{.TypeName}}]) *{{$.Name}} { s.{{.Name}} = iso{{.Name}}.ReverseGet(v); return s },
),
{{- if .IsComparable}}
lens{{.Name}} := __lens.MakeLensStrictWithName(
func(s *{{$.Name}}{{$.TypeParamNames}}) {{.TypeName}} { return s.{{.Name}} },
func(s *{{$.Name}}{{$.TypeParamNames}}, v {{.TypeName}}) *{{$.Name}}{{$.TypeParamNames}} { s.{{.Name}} = v; return s },
"(*{{$.Name}}{{$.TypeParamNames}}).{{.Name}}",
)
{{- else}}
{{.Name}}: L.MakeLensRef(
func(s *{{$.Name}}) {{.TypeName}} { return s.{{.Name}} },
func(s *{{$.Name}}, v {{.TypeName}}) *{{$.Name}} { s.{{.Name}} = v; return s },
),
lens{{.Name}} := __lens.MakeLensRefWithName(
func(s *{{$.Name}}{{$.TypeParamNames}}) {{.TypeName}} { return s.{{.Name}} },
func(s *{{$.Name}}{{$.TypeParamNames}}, v {{.TypeName}}) *{{$.Name}}{{$.TypeParamNames}} { s.{{.Name}} = v; return s },
"(*{{$.Name}}{{$.TypeParamNames}}).{{.Name}}",
)
{{- end}}
{{- end}}
// optional lenses
{{- range .Fields}}
{{- if .IsComparable}}
lens{{.Name}}O := __lens_option.FromIso[*{{$.Name}}{{$.TypeParamNames}}](__iso_option.FromZero[{{.TypeName}}]())(lens{{.Name}})
{{- end}}
{{- end}}
return {{.Name}}RefLenses{{.TypeParamNames}}{
// mandatory lenses
{{- range .Fields}}
{{.Name}}: lens{{.Name}},
{{- end}}
// optional lenses
{{- range .Fields}}
{{- if .IsComparable}}
{{.Name}}O: lens{{.Name}}O,
{{- end}}
{{- end}}
}
}
// Make{{.Name}}Prisms creates a new {{.Name}}Prisms with prisms for all fields
func Make{{.Name}}Prisms{{.TypeParams}}() {{.Name}}Prisms{{.TypeParamNames}} {
{{- range .Fields}}
{{- if .IsComparable}}
_fromNonZero{{.Name}} := __option.FromNonZero[{{.TypeName}}]()
_prism{{.Name}} := __prism.MakePrismWithName(
func(s {{$.Name}}{{$.TypeParamNames}}) __option.Option[{{.TypeName}}] { return _fromNonZero{{.Name}}(s.{{.Name}}) },
func(v {{.TypeName}}) {{$.Name}}{{$.TypeParamNames}} {
{{- if .IsEmbedded}}
var result {{$.Name}}{{$.TypeParamNames}}
result.{{.Name}} = v
return result
{{- else}}
return {{$.Name}}{{$.TypeParamNames}}{ {{.Name}}: v }
{{- end}}
},
"{{$.Name}}{{$.TypeParamNames}}.{{.Name}}",
)
{{- else}}
_prism{{.Name}} := __prism.MakePrismWithName(
func(s {{$.Name}}{{$.TypeParamNames}}) __option.Option[{{.TypeName}}] { return __option.Some(s.{{.Name}}) },
func(v {{.TypeName}}) {{$.Name}}{{$.TypeParamNames}} {
{{- if .IsEmbedded}}
var result {{$.Name}}{{$.TypeParamNames}}
result.{{.Name}} = v
return result
{{- else}}
return {{$.Name}}{{$.TypeParamNames}}{ {{.Name}}: v }
{{- end}}
},
"{{$.Name}}{{$.TypeParamNames}}.{{.Name}}",
)
{{- end}}
{{- end}}
return {{.Name}}Prisms{{.TypeParamNames}} {
{{- range .Fields}}
{{.Name}}: _prism{{.Name}},
{{- end}}
}
}
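To make the templates above concrete, the following sketch shows roughly what using the generated code could look like for a small annotated struct. Person, MakePersonLenses, and the accessor shapes (the usual fp-go Lens with Get and a curried Set) are assumptions for illustration, not output copied from the generator.

// fp-go:Lens
type Person struct {
	Name string
	Age  int
}

func examplePersonLenses() {
	// After running the generator (illustrative usage):
	lenses := MakePersonLenses()
	p := Person{Name: "Ada", Age: 36}

	_ = lenses.Name.Get(p)          // "Ada"
	p = lenses.Name.Set("Grace")(p) // copy of p with Name replaced
	_ = lenses.AgeO.Get(p)          // Option[int]: Some(36); None once Age is the zero value
}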
@@ -257,6 +356,260 @@ func isPointerType(expr ast.Expr) bool {
return ok
}
// isComparableType checks if a type expression represents a comparable type.
// Comparable types in Go include:
// - Basic types (bool, numeric types, string)
// - Pointer types
// - Channel types
// - Interface types
// - Structs where all fields are comparable
// - Arrays where the element type is comparable
//
// Non-comparable types include:
// - Slices
// - Maps
// - Functions
//
// typeParams is a map of type parameter names to their constraints (e.g., "T" -> "any", "K" -> "comparable")
func isComparableType(expr ast.Expr, typeParams map[string]string) bool {
switch t := expr.(type) {
case *ast.Ident:
// Check if this is a type parameter
if constraint, isTypeParam := typeParams[t.Name]; isTypeParam {
// Type parameter - check its constraint
return constraint == "comparable"
}
// Basic types and named types.
// We cannot determine whether a custom named type is comparable without full
// type checking, so we optimistically assume that it is.
name := t.Name
if name == "error" {
// error is an interface; interface types are comparable
return true
}
return true
case *ast.StarExpr:
// Pointer types are always comparable
return true
case *ast.ArrayType:
// Arrays are comparable if their element type is comparable
if t.Len == nil {
// This is a slice (no length), slices are not comparable
return false
}
// Fixed-size array, check element type
return isComparableType(t.Elt, typeParams)
case *ast.MapType:
// Maps are not comparable
return false
case *ast.FuncType:
// Functions are not comparable
return false
case *ast.InterfaceType:
// Interface types are comparable
return true
case *ast.StructType:
// Structs are comparable if all fields are comparable
// We can't easily determine this without full type information,
// so we conservatively return false for struct literals
return false
case *ast.SelectorExpr:
// Qualified identifier (e.g., pkg.Type).
// We cannot determine comparability without full type information, so we
// special-case a few well-known standard library types and otherwise
// optimistically assume the type is comparable.
if ident, ok := t.X.(*ast.Ident); ok {
pkgName := ident.Name
typeName := t.Sel.Name
if pkgName == "context" && typeName == "Context" {
// context.Context is an interface; interface types are comparable
return true
}
// Other qualified types are optimistically assumed to be comparable,
// since this cannot be verified from the AST alone.
}
}
return true
case *ast.IndexExpr, *ast.IndexListExpr:
// Generic types - we can't determine comparability without type information
// For common generic types, we can make educated guesses
var baseExpr ast.Expr
if idx, ok := t.(*ast.IndexExpr); ok {
baseExpr = idx.X
} else if idxList, ok := t.(*ast.IndexListExpr); ok {
baseExpr = idxList.X
}
if sel, ok := baseExpr.(*ast.SelectorExpr); ok {
if ident, ok := sel.X.(*ast.Ident); ok {
pkgName := ident.Name
typeName := sel.Sel.Name
// Check for known non-comparable generic types
if pkgName == "option" && typeName == "Option" {
// Option types are not comparable (they contain a slice internally)
return false
}
if pkgName == "either" && typeName == "Either" {
// Either types are not comparable
return false
}
}
}
// For other generic types, conservatively assume not comparable
log.Printf("Not comparable type: %v\n", t)
return false
case *ast.ChanType:
// Channel types are comparable
return true
default:
// Unknown type, conservatively assume not comparable
return false
}
}
// embeddedFieldResult holds both the field info and its AST type for import extraction
type embeddedFieldResult struct {
fieldInfo fieldInfo
fieldType ast.Expr
}
// extractEmbeddedFields extracts fields from an embedded struct type
// It returns a slice of embeddedFieldResult for all exported fields in the embedded struct
// typeParamsMap contains the type parameters of the parent struct (for checking comparability)
func extractEmbeddedFields(embedType ast.Expr, fileImports map[string]string, file *ast.File, typeParamsMap map[string]string) []embeddedFieldResult {
var results []embeddedFieldResult
// Get the type name of the embedded field
var typeName string
var typeIdent *ast.Ident
switch t := embedType.(type) {
case *ast.Ident:
// Direct embedded type: type MyStruct struct { EmbeddedType }
typeName = t.Name
typeIdent = t
case *ast.StarExpr:
// Pointer embedded type: type MyStruct struct { *EmbeddedType }
if ident, ok := t.X.(*ast.Ident); ok {
typeName = ident.Name
typeIdent = ident
}
case *ast.SelectorExpr:
// Qualified embedded type: type MyStruct struct { pkg.EmbeddedType }
// We can't easily resolve this without full type information
// For now, skip these
return results
}
if S.IsEmpty(typeName) || typeIdent == nil {
return results
}
// Find the struct definition in the same file
var embeddedStructType *ast.StructType
ast.Inspect(file, func(n ast.Node) bool {
if ts, ok := n.(*ast.TypeSpec); ok {
if ts.Name.Name == typeName {
if st, ok := ts.Type.(*ast.StructType); ok {
embeddedStructType = st
return false
}
}
}
return true
})
if embeddedStructType == nil {
// Struct not found in this file, might be from another package
return results
}
// Extract fields from the embedded struct
for _, field := range embeddedStructType.Fields.List {
// Skip embedded fields within embedded structs (for now, to avoid infinite recursion)
if len(field.Names) == 0 {
continue
}
for _, name := range field.Names {
// Only export lenses for exported fields
if name.IsExported() {
fieldTypeName := getTypeName(field.Type)
isOptional := false
baseType := fieldTypeName
// Check if field is optional
if isPointerType(field.Type) {
isOptional = true
baseType = strings.TrimPrefix(fieldTypeName, "*")
} else if hasOmitEmpty(field.Tag) {
isOptional = true
}
// Check if the type is comparable
isComparable := isComparableType(field.Type, typeParamsMap)
results = append(results, embeddedFieldResult{
fieldInfo: fieldInfo{
Name: name.Name,
TypeName: fieldTypeName,
BaseType: baseType,
IsOptional: isOptional,
IsComparable: isComparable,
IsEmbedded: true,
},
fieldType: field.Type,
})
}
}
}
return results
}
// extractTypeParams extracts type parameters from a type spec
// Returns two strings: full params like "[T any]" and names only like "[T]"
func extractTypeParams(typeSpec *ast.TypeSpec) (string, string) {
if typeSpec.TypeParams == nil || len(typeSpec.TypeParams.List) == 0 {
return "", ""
}
var params []string
var names []string
for _, field := range typeSpec.TypeParams.List {
for _, name := range field.Names {
constraint := getTypeName(field.Type)
params = append(params, name.Name+" "+constraint)
names = append(names, name.Name)
}
}
fullParams := "[" + strings.Join(params, ", ") + "]"
nameParams := "[" + strings.Join(names, ", ") + "]"
return fullParams, nameParams
}
// buildTypeParamsMap creates a map of type parameter names to their constraints
// e.g., for "type Box[T any, K comparable]", returns {"T": "any", "K": "comparable"}
func buildTypeParamsMap(typeSpec *ast.TypeSpec) map[string]string {
typeParamsMap := make(map[string]string)
if typeSpec.TypeParams == nil || len(typeSpec.TypeParams.List) == 0 {
return typeParamsMap
}
for _, field := range typeSpec.TypeParams.List {
constraint := getTypeName(field.Type)
for _, name := range field.Names {
typeParamsMap[name.Name] = constraint
}
}
return typeParamsMap
}
// parseFile parses a Go file and extracts structs with lens annotations
func parseFile(filename string) ([]structInfo, string, error) {
fset := token.NewFileSet()
@@ -320,9 +673,27 @@ func parseFile(filename string) ([]structInfo, string, error) {
var fields []fieldInfo
structImports := make(map[string]string)
// Build type parameters map for this struct
typeParamsMap := buildTypeParamsMap(typeSpec)
for _, field := range structType.Fields.List {
if len(field.Names) == 0 {
// Embedded field, skip for now
// Embedded field - promote its fields
embeddedResults := extractEmbeddedFields(field.Type, fileImports, node, typeParamsMap)
for _, embResult := range embeddedResults {
// Extract imports from embedded field's type
fieldImports := make(map[string]string)
extractImports(embResult.fieldType, fieldImports)
// Resolve package names to full import paths
for pkgName := range fieldImports {
if importPath, ok := fileImports[pkgName]; ok {
structImports[importPath] = pkgName
}
}
fields = append(fields, embResult.fieldInfo)
}
continue
}
for _, name := range field.Names {
@@ -331,6 +702,7 @@ func parseFile(filename string) ([]structInfo, string, error) {
typeName := getTypeName(field.Type)
isOptional := false
baseType := typeName
isComparable := false
// Check if field is optional:
// 1. Pointer types are always optional
@@ -344,6 +716,11 @@ func parseFile(filename string) ([]structInfo, string, error) {
isOptional = true
}
// Check whether the field type is comparable; comparable fields additionally
// get optional (LensO) accessors in the generated code
isComparable = isComparableType(field.Type, typeParamsMap)
// log.Printf("field %s, type: %v, isComparable: %b\n", name, field.Type, isComparable)
// Extract imports from this field's type
fieldImports := make(map[string]string)
extractImports(field.Type, fieldImports)
@@ -356,20 +733,24 @@ func parseFile(filename string) ([]structInfo, string, error) {
}
fields = append(fields, fieldInfo{
Name: name.Name,
TypeName: typeName,
BaseType: baseType,
IsOptional: isOptional,
Name: name.Name,
TypeName: typeName,
BaseType: baseType,
IsOptional: isOptional,
IsComparable: isComparable,
})
}
}
}
if len(fields) > 0 {
typeParams, typeParamNames := extractTypeParams(typeSpec)
structs = append(structs, structInfo{
Name: typeSpec.Name.Name,
Fields: fields,
Imports: structImports,
Name: typeSpec.Name.Name,
TypeParams: typeParams,
TypeParamNames: typeParamNames,
Fields: fields,
Imports: structImports,
})
}
@@ -380,7 +761,7 @@ func parseFile(filename string) ([]structInfo, string, error) {
}
// generateLensHelpers scans a directory for Go files and generates lens code
func generateLensHelpers(dir, filename string, verbose bool) error {
func generateLensHelpers(dir, filename string, verbose, includeTestFiles bool) error {
// Get absolute path
absDir, err := filepath.Abs(dir)
if err != nil {
@@ -401,21 +782,34 @@ func generateLensHelpers(dir, filename string, verbose bool) error {
log.Printf("Found %d Go files", len(files))
}
// Parse all files and collect structs
var allStructs []structInfo
// Parse all files and collect structs, separating test and non-test files
var regularStructs []structInfo
var testStructs []structInfo
var packageName string
for _, file := range files {
// Skip generated files and test files
if strings.HasSuffix(file, "_test.go") || strings.Contains(file, "gen.go") {
baseName := filepath.Base(file)
// Skip generated lens files (both regular and test)
if strings.HasPrefix(baseName, "gen_lens") && strings.HasSuffix(baseName, ".go") {
if verbose {
log.Printf("Skipping file: %s", filepath.Base(file))
log.Printf("Skipping generated lens file: %s", baseName)
}
continue
}
isTestFile := strings.HasSuffix(file, "_test.go")
// Skip test files unless includeTestFiles is true
if isTestFile && !includeTestFiles {
if verbose {
log.Printf("Skipping test file: %s", baseName)
}
continue
}
if verbose {
log.Printf("Parsing file: %s", filepath.Base(file))
log.Printf("Parsing file: %s", baseName)
}
structs, pkg, err := parseFile(file)
@@ -425,27 +819,52 @@ func generateLensHelpers(dir, filename string, verbose bool) error {
}
if verbose && len(structs) > 0 {
log.Printf("Found %d annotated struct(s) in %s", len(structs), filepath.Base(file))
log.Printf("Found %d annotated struct(s) in %s", len(structs), baseName)
for _, s := range structs {
log.Printf(" - %s (%d fields)", s.Name, len(s.Fields))
}
}
if packageName == "" {
if S.IsEmpty(packageName) {
packageName = pkg
}
allStructs = append(allStructs, structs...)
// Separate structs based on source file type
if isTestFile {
testStructs = append(testStructs, structs...)
} else {
regularStructs = append(regularStructs, structs...)
}
}
if len(allStructs) == 0 {
if len(regularStructs) == 0 && len(testStructs) == 0 {
log.Printf("No structs with %s annotation found in %s", lensAnnotation, absDir)
return nil
}
// Generate regular lens file if there are regular structs
if len(regularStructs) > 0 {
if err := generateLensFile(absDir, filename, packageName, regularStructs, verbose); err != nil {
return err
}
}
// Generate test lens file if there are test structs
if len(testStructs) > 0 {
testFilename := strings.TrimSuffix(filename, ".go") + "_test.go"
if err := generateLensFile(absDir, testFilename, packageName, testStructs, verbose); err != nil {
return err
}
}
return nil
}
// generateLensFile generates a lens file for the given structs
func generateLensFile(absDir, filename, packageName string, structs []structInfo, verbose bool) error {
// Collect all unique imports from all structs
allImports := make(map[string]string) // import path -> alias
for _, s := range allStructs {
for _, s := range structs {
for importPath, alias := range s.Imports {
allImports[importPath] = alias
}
@@ -459,7 +878,7 @@ func generateLensHelpers(dir, filename string, verbose bool) error {
}
defer f.Close()
log.Printf("Generating lens code in [%s] for package [%s] with [%d] structs ...", outPath, packageName, len(allStructs))
log.Printf("Generating lens code in [%s] for package [%s] with [%d] structs ...", outPath, packageName, len(structs))
// Write header
writePackage(f, packageName)
@@ -467,10 +886,11 @@ func generateLensHelpers(dir, filename string, verbose bool) error {
// Write imports
f.WriteString("import (\n")
// Standard fp-go imports always needed
f.WriteString("\tL \"github.com/IBM/fp-go/v2/optics/lens\"\n")
f.WriteString("\tLO \"github.com/IBM/fp-go/v2/optics/lens/option\"\n")
f.WriteString("\tO \"github.com/IBM/fp-go/v2/option\"\n")
f.WriteString("\tI \"github.com/IBM/fp-go/v2/optics/iso/option\"\n")
f.WriteString("\t__lens \"github.com/IBM/fp-go/v2/optics/lens\"\n")
f.WriteString("\t__option \"github.com/IBM/fp-go/v2/option\"\n")
f.WriteString("\t__prism \"github.com/IBM/fp-go/v2/optics/prism\"\n")
f.WriteString("\t__lens_option \"github.com/IBM/fp-go/v2/optics/lens/option\"\n")
f.WriteString("\t__iso_option \"github.com/IBM/fp-go/v2/optics/iso/option\"\n")
// Add additional imports collected from field types
for importPath, alias := range allImports {
@@ -480,7 +900,7 @@ func generateLensHelpers(dir, filename string, verbose bool) error {
f.WriteString(")\n")
// Generate lens code for each struct using templates
for _, s := range allStructs {
for _, s := range structs {
var buf bytes.Buffer
// Generate struct type
@@ -512,12 +932,14 @@ func LensCommand() *C.Command {
flagLensDir,
flagFilename,
flagVerbose,
flagIncludeTestFiles,
},
Action: func(ctx *C.Context) error {
return generateLensHelpers(
ctx.String(keyLensDir),
ctx.String(keyFilename),
ctx.Bool(keyVerbose),
ctx.Bool(keyIncludeTestFile),
)
},
}


@@ -25,6 +25,7 @@ import (
"strings"
"testing"
S "github.com/IBM/fp-go/v2/string"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
@@ -60,7 +61,7 @@ func TestHasLensAnnotation(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
var doc *ast.CommentGroup
if tt.comment != "" {
if S.IsNonEmpty(tt.comment) {
doc = &ast.CommentGroup{
List: []*ast.Comment{
{Text: tt.comment},
@@ -168,6 +169,91 @@ func TestIsPointerType(t *testing.T) {
}
}
func TestIsComparableType(t *testing.T) {
tests := []struct {
name string
code string
expected bool
}{
{
name: "basic type - string",
code: "type T struct { F string }",
expected: true,
},
{
name: "basic type - int",
code: "type T struct { F int }",
expected: true,
},
{
name: "basic type - bool",
code: "type T struct { F bool }",
expected: true,
},
{
name: "pointer type",
code: "type T struct { F *string }",
expected: true,
},
{
name: "slice type - not comparable",
code: "type T struct { F []string }",
expected: false,
},
{
name: "map type - not comparable",
code: "type T struct { F map[string]int }",
expected: false,
},
{
name: "array type - comparable if element is",
code: "type T struct { F [5]int }",
expected: true,
},
{
name: "interface type",
code: "type T struct { F interface{} }",
expected: true,
},
{
name: "channel type",
code: "type T struct { F chan int }",
expected: true,
},
{
name: "function type - not comparable",
code: "type T struct { F func() }",
expected: false,
},
{
name: "struct literal - conservatively not comparable",
code: "type T struct { F struct{ X int } }",
expected: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
fset := token.NewFileSet()
file, err := parser.ParseFile(fset, "", "package test\n"+tt.code, 0)
require.NoError(t, err)
var fieldType ast.Expr
ast.Inspect(file, func(n ast.Node) bool {
if field, ok := n.(*ast.Field); ok && len(field.Names) > 0 {
fieldType = field.Type
return false
}
return true
})
require.NotNil(t, fieldType)
result := isComparableType(fieldType, map[string]string{})
assert.Equal(t, tt.expected, result)
})
}
}
func TestHasOmitEmpty(t *testing.T) {
tests := []struct {
name string
@@ -204,7 +290,7 @@ func TestHasOmitEmpty(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
var tag *ast.BasicLit
if tt.tag != "" {
if S.IsNonEmpty(tt.tag) {
tag = &ast.BasicLit{
Value: tt.tag,
}
@@ -241,7 +327,7 @@ type Other struct {
}
`
err := os.WriteFile(testFile, []byte(testCode), 0644)
err := os.WriteFile(testFile, []byte(testCode), 0o644)
require.NoError(t, err)
// Parse the file
@@ -295,7 +381,7 @@ type Config struct {
}
`
err := os.WriteFile(testFile, []byte(testCode), 0644)
err := os.WriteFile(testFile, []byte(testCode), 0o644)
require.NoError(t, err)
// Parse the file
@@ -337,6 +423,167 @@ type Config struct {
assert.False(t, config.Fields[4].IsOptional, "Required field without omitempty should not be optional")
}
func TestParseFileWithComparableTypes(t *testing.T) {
// Create a temporary test file
tmpDir := t.TempDir()
testFile := filepath.Join(tmpDir, "test.go")
testCode := `package testpkg
// fp-go:Lens
type TypeTest struct {
Name string
Age int
Pointer *string
Slice []string
Map map[string]int
Channel chan int
}
`
err := os.WriteFile(testFile, []byte(testCode), 0o644)
require.NoError(t, err)
// Parse the file
structs, pkg, err := parseFile(testFile)
require.NoError(t, err)
// Verify results
assert.Equal(t, "testpkg", pkg)
assert.Len(t, structs, 1)
// Check TypeTest struct
typeTest := structs[0]
assert.Equal(t, "TypeTest", typeTest.Name)
assert.Len(t, typeTest.Fields, 6)
// Name - string is comparable
assert.Equal(t, "Name", typeTest.Fields[0].Name)
assert.Equal(t, "string", typeTest.Fields[0].TypeName)
assert.False(t, typeTest.Fields[0].IsOptional)
assert.True(t, typeTest.Fields[0].IsComparable, "string should be comparable")
// Age - int is comparable
assert.Equal(t, "Age", typeTest.Fields[1].Name)
assert.Equal(t, "int", typeTest.Fields[1].TypeName)
assert.False(t, typeTest.Fields[1].IsOptional)
assert.True(t, typeTest.Fields[1].IsComparable, "int should be comparable")
// Pointer - pointer is optional, IsComparable not checked for optional fields
assert.Equal(t, "Pointer", typeTest.Fields[2].Name)
assert.Equal(t, "*string", typeTest.Fields[2].TypeName)
assert.True(t, typeTest.Fields[2].IsOptional)
// Slice - not comparable
assert.Equal(t, "Slice", typeTest.Fields[3].Name)
assert.Equal(t, "[]string", typeTest.Fields[3].TypeName)
assert.False(t, typeTest.Fields[3].IsOptional)
assert.False(t, typeTest.Fields[3].IsComparable, "slice should not be comparable")
// Map - not comparable
assert.Equal(t, "Map", typeTest.Fields[4].Name)
assert.Equal(t, "map[string]int", typeTest.Fields[4].TypeName)
assert.False(t, typeTest.Fields[4].IsOptional)
assert.False(t, typeTest.Fields[4].IsComparable, "map should not be comparable")
// Channel - comparable (note: getTypeName returns "any" for channel types, but isComparableType correctly identifies them)
assert.Equal(t, "Channel", typeTest.Fields[5].Name)
assert.Equal(t, "any", typeTest.Fields[5].TypeName) // getTypeName doesn't handle chan types specifically
assert.False(t, typeTest.Fields[5].IsOptional)
assert.True(t, typeTest.Fields[5].IsComparable, "channel should be comparable")
}
func TestLensRefTemplatesWithComparable(t *testing.T) {
s := structInfo{
Name: "TestStruct",
Fields: []fieldInfo{
{Name: "Name", TypeName: "string", IsOptional: false, IsComparable: true},
{Name: "Age", TypeName: "int", IsOptional: false, IsComparable: true},
{Name: "Data", TypeName: "[]byte", IsOptional: false, IsComparable: false},
{Name: "Pointer", TypeName: "*string", IsOptional: true, IsComparable: false},
},
}
// Test constructor template for RefLenses
var constructorBuf bytes.Buffer
err := constructorTmpl.Execute(&constructorBuf, s)
require.NoError(t, err)
constructorStr := constructorBuf.String()
// Check that MakeLensStrict is used for comparable types in RefLenses
assert.Contains(t, constructorStr, "func MakeTestStructRefLenses() TestStructRefLenses")
// Name field - comparable, should use MakeLensStrict
assert.Contains(t, constructorStr, "lensName := __lens.MakeLensStrictWithName(",
"comparable field Name should use MakeLensStrictWithName in RefLenses")
// Age field - comparable, should use MakeLensStrict
assert.Contains(t, constructorStr, "lensAge := __lens.MakeLensStrictWithName(",
"comparable field Age should use MakeLensStrictWithName in RefLenses")
// Data field - not comparable, should use MakeLensRef
assert.Contains(t, constructorStr, "lensData := __lens.MakeLensRefWithName(",
"non-comparable field Data should use MakeLensRefWithName in RefLenses")
}
func TestGenerateLensHelpersWithComparable(t *testing.T) {
// Create a temporary directory with test files
tmpDir := t.TempDir()
testCode := `package testpkg
// fp-go:Lens
type TestStruct struct {
Name string
Count int
Data []byte
}
`
testFile := filepath.Join(tmpDir, "test.go")
err := os.WriteFile(testFile, []byte(testCode), 0o644)
require.NoError(t, err)
// Generate lens code
outputFile := "gen.go"
err = generateLensHelpers(tmpDir, outputFile, false, false)
require.NoError(t, err)
// Verify the generated file exists
genPath := filepath.Join(tmpDir, outputFile)
_, err = os.Stat(genPath)
require.NoError(t, err)
// Read and verify the generated content
content, err := os.ReadFile(genPath)
require.NoError(t, err)
contentStr := string(content)
// Check for expected content in RefLenses
assert.Contains(t, contentStr, "MakeTestStructRefLenses")
// Name and Count are comparable, should use MakeLensStrictWithName
assert.Contains(t, contentStr, "__lens.MakeLensStrictWithName",
"comparable fields should use MakeLensStrictWithName in RefLenses")
// Data is not comparable (slice), should use MakeLensRefWithName
assert.Contains(t, contentStr, "__lens.MakeLensRefWithName",
"non-comparable fields should use MakeLensRefWithName in RefLenses")
// Verify the pattern appears for Name field (comparable)
namePattern := "lensName := __lens.MakeLensStrictWithName("
assert.Contains(t, contentStr, namePattern,
"Name field should use MakeLensStrictWithName")
// Verify the pattern appears for Data field (not comparable)
dataPattern := "lensData := __lens.MakeLensRefWithName("
assert.Contains(t, contentStr, dataPattern,
"Data field should use MakeLensRefWithName")
}
func TestGenerateLensHelpers(t *testing.T) {
// Create a temporary directory with test files
tmpDir := t.TempDir()
@@ -351,12 +598,12 @@ type TestStruct struct {
`
testFile := filepath.Join(tmpDir, "test.go")
err := os.WriteFile(testFile, []byte(testCode), 0644)
err := os.WriteFile(testFile, []byte(testCode), 0o644)
require.NoError(t, err)
// Generate lens code
outputFile := "gen.go"
err = generateLensHelpers(tmpDir, outputFile, false)
err = generateLensHelpers(tmpDir, outputFile, false, false)
require.NoError(t, err)
// Verify the generated file exists
@@ -373,11 +620,11 @@ type TestStruct struct {
// Check for expected content
assert.Contains(t, contentStr, "package testpkg")
assert.Contains(t, contentStr, "Code generated by go generate")
assert.Contains(t, contentStr, "TestStructLens")
assert.Contains(t, contentStr, "MakeTestStructLens")
assert.Contains(t, contentStr, "L.Lens[TestStruct, string]")
assert.Contains(t, contentStr, "LO.LensO[TestStruct, *int]")
assert.Contains(t, contentStr, "I.FromZero")
assert.Contains(t, contentStr, "TestStructLenses")
assert.Contains(t, contentStr, "MakeTestStructLenses")
assert.Contains(t, contentStr, "__lens.Lens[TestStruct, string]")
assert.Contains(t, contentStr, "__lens_option.LensO[TestStruct, *int]")
assert.Contains(t, contentStr, "__iso_option.FromZero")
}
func TestGenerateLensHelpersNoAnnotations(t *testing.T) {
@@ -393,12 +640,12 @@ type TestStruct struct {
`
testFile := filepath.Join(tmpDir, "test.go")
err := os.WriteFile(testFile, []byte(testCode), 0644)
err := os.WriteFile(testFile, []byte(testCode), 0o644)
require.NoError(t, err)
// Generate lens code (should not create file)
outputFile := "gen.go"
err = generateLensHelpers(tmpDir, outputFile, false)
err = generateLensHelpers(tmpDir, outputFile, false, false)
require.NoError(t, err)
// Verify the generated file does not exist
@@ -411,8 +658,8 @@ func TestLensTemplates(t *testing.T) {
s := structInfo{
Name: "TestStruct",
Fields: []fieldInfo{
{Name: "Name", TypeName: "string", IsOptional: false},
{Name: "Value", TypeName: "*int", IsOptional: true},
{Name: "Name", TypeName: "string", IsOptional: false, IsComparable: true},
{Name: "Value", TypeName: "*int", IsOptional: true, IsComparable: true},
},
}
@@ -423,8 +670,10 @@ func TestLensTemplates(t *testing.T) {
structStr := structBuf.String()
assert.Contains(t, structStr, "type TestStructLenses struct")
assert.Contains(t, structStr, "Name L.Lens[TestStruct, string]")
assert.Contains(t, structStr, "Value LO.LensO[TestStruct, *int]")
assert.Contains(t, structStr, "Name __lens.Lens[TestStruct, string]")
assert.Contains(t, structStr, "NameO __lens_option.LensO[TestStruct, string]")
assert.Contains(t, structStr, "Value __lens.Lens[TestStruct, *int]")
assert.Contains(t, structStr, "ValueO __lens_option.LensO[TestStruct, *int]")
// Test constructor template
var constructorBuf bytes.Buffer
@@ -434,19 +683,21 @@ func TestLensTemplates(t *testing.T) {
constructorStr := constructorBuf.String()
assert.Contains(t, constructorStr, "func MakeTestStructLenses() TestStructLenses")
assert.Contains(t, constructorStr, "return TestStructLenses{")
assert.Contains(t, constructorStr, "Name: L.MakeLens(")
assert.Contains(t, constructorStr, "Value: L.MakeLens(")
assert.Contains(t, constructorStr, "I.FromZero")
assert.Contains(t, constructorStr, "Name: lensName,")
assert.Contains(t, constructorStr, "NameO: lensNameO,")
assert.Contains(t, constructorStr, "Value: lensValue,")
assert.Contains(t, constructorStr, "ValueO: lensValueO,")
assert.Contains(t, constructorStr, "__iso_option.FromZero")
}
func TestLensTemplatesWithOmitEmpty(t *testing.T) {
s := structInfo{
Name: "ConfigStruct",
Fields: []fieldInfo{
{Name: "Name", TypeName: "string", IsOptional: false},
{Name: "Value", TypeName: "string", IsOptional: true}, // non-pointer with omitempty
{Name: "Count", TypeName: "int", IsOptional: true}, // non-pointer with omitempty
{Name: "Pointer", TypeName: "*string", IsOptional: true}, // pointer
{Name: "Name", TypeName: "string", IsOptional: false, IsComparable: true},
{Name: "Value", TypeName: "string", IsOptional: true, IsComparable: true}, // non-pointer with omitempty
{Name: "Count", TypeName: "int", IsOptional: true, IsComparable: true}, // non-pointer with omitempty
{Name: "Pointer", TypeName: "*string", IsOptional: true, IsComparable: true}, // pointer
},
}
@@ -457,10 +708,14 @@ func TestLensTemplatesWithOmitEmpty(t *testing.T) {
structStr := structBuf.String()
assert.Contains(t, structStr, "type ConfigStructLenses struct")
assert.Contains(t, structStr, "Name L.Lens[ConfigStruct, string]")
assert.Contains(t, structStr, "Value LO.LensO[ConfigStruct, string]", "non-pointer with omitempty should use LensO")
assert.Contains(t, structStr, "Count LO.LensO[ConfigStruct, int]", "non-pointer with omitempty should use LensO")
assert.Contains(t, structStr, "Pointer LO.LensO[ConfigStruct, *string]")
assert.Contains(t, structStr, "Name __lens.Lens[ConfigStruct, string]")
assert.Contains(t, structStr, "NameO __lens_option.LensO[ConfigStruct, string]")
assert.Contains(t, structStr, "Value __lens.Lens[ConfigStruct, string]")
assert.Contains(t, structStr, "ValueO __lens_option.LensO[ConfigStruct, string]", "comparable non-pointer with omitempty should have optional lens")
assert.Contains(t, structStr, "Count __lens.Lens[ConfigStruct, int]")
assert.Contains(t, structStr, "CountO __lens_option.LensO[ConfigStruct, int]", "comparable non-pointer with omitempty should have optional lens")
assert.Contains(t, structStr, "Pointer __lens.Lens[ConfigStruct, *string]")
assert.Contains(t, structStr, "PointerO __lens_option.LensO[ConfigStruct, *string]")
// Test constructor template
var constructorBuf bytes.Buffer
@@ -469,9 +724,9 @@ func TestLensTemplatesWithOmitEmpty(t *testing.T) {
constructorStr := constructorBuf.String()
assert.Contains(t, constructorStr, "func MakeConfigStructLenses() ConfigStructLenses")
assert.Contains(t, constructorStr, "isoValue := I.FromZero[string]()")
assert.Contains(t, constructorStr, "isoCount := I.FromZero[int]()")
assert.Contains(t, constructorStr, "isoPointer := I.FromZero[*string]()")
assert.Contains(t, constructorStr, "__iso_option.FromZero[string]()")
assert.Contains(t, constructorStr, "__iso_option.FromZero[int]()")
assert.Contains(t, constructorStr, "__iso_option.FromZero[*string]()")
}
func TestLensCommandFlags(t *testing.T) {
@@ -480,12 +735,12 @@ func TestLensCommandFlags(t *testing.T) {
assert.Equal(t, "lens", cmd.Name)
assert.Equal(t, "generate lens code for annotated structs", cmd.Usage)
assert.Contains(t, strings.ToLower(cmd.Description), "fp-go:lens")
assert.Contains(t, strings.ToLower(cmd.Description), "lenso")
assert.Contains(t, strings.ToLower(cmd.Description), "lenso", "Description should mention LensO for optional lenses")
// Check flags
assert.Len(t, cmd.Flags, 3)
assert.Len(t, cmd.Flags, 4)
var hasDir, hasFilename, hasVerbose bool
var hasDir, hasFilename, hasVerbose, hasIncludeTestFiles bool
for _, flag := range cmd.Flags {
switch flag.Names()[0] {
case "dir":
@@ -494,10 +749,340 @@ func TestLensCommandFlags(t *testing.T) {
hasFilename = true
case "verbose":
hasVerbose = true
case "include-test-files":
hasIncludeTestFiles = true
}
}
assert.True(t, hasDir, "should have dir flag")
assert.True(t, hasFilename, "should have filename flag")
assert.True(t, hasVerbose, "should have verbose flag")
assert.True(t, hasIncludeTestFiles, "should have include-test-files flag")
}
func TestParseFileWithEmbeddedStruct(t *testing.T) {
// Create a temporary test file
tmpDir := t.TempDir()
testFile := filepath.Join(tmpDir, "test.go")
testCode := `package testpkg
// Base struct to be embedded
type Base struct {
ID int
Name string
}
// fp-go:Lens
type Extended struct {
Base
Extra string
}
`
err := os.WriteFile(testFile, []byte(testCode), 0o644)
require.NoError(t, err)
// Parse the file
structs, pkg, err := parseFile(testFile)
require.NoError(t, err)
// Verify results
assert.Equal(t, "testpkg", pkg)
assert.Len(t, structs, 1)
// Check Extended struct
extended := structs[0]
assert.Equal(t, "Extended", extended.Name)
assert.Len(t, extended.Fields, 3, "Should have 3 fields: ID, Name (from Base), and Extra")
// Check that embedded fields are promoted
fieldNames := make(map[string]bool)
for _, field := range extended.Fields {
fieldNames[field.Name] = true
}
assert.True(t, fieldNames["ID"], "Should have promoted ID field from Base")
assert.True(t, fieldNames["Name"], "Should have promoted Name field from Base")
assert.True(t, fieldNames["Extra"], "Should have Extra field")
}
func TestGenerateLensHelpersWithEmbeddedStruct(t *testing.T) {
// Create a temporary directory with test files
tmpDir := t.TempDir()
testCode := `package testpkg
// Base struct to be embedded
type Address struct {
Street string
City string
}
// fp-go:Lens
type Person struct {
Address
Name string
Age int
}
`
testFile := filepath.Join(tmpDir, "test.go")
err := os.WriteFile(testFile, []byte(testCode), 0o644)
require.NoError(t, err)
// Generate lens code
outputFile := "gen.go"
err = generateLensHelpers(tmpDir, outputFile, false, false)
require.NoError(t, err)
// Verify the generated file exists
genPath := filepath.Join(tmpDir, outputFile)
_, err = os.Stat(genPath)
require.NoError(t, err)
// Read and verify the generated content
content, err := os.ReadFile(genPath)
require.NoError(t, err)
contentStr := string(content)
// Check for expected content
assert.Contains(t, contentStr, "package testpkg")
assert.Contains(t, contentStr, "PersonLenses")
assert.Contains(t, contentStr, "MakePersonLenses")
// Check that embedded fields are included
assert.Contains(t, contentStr, "Street __lens.Lens[Person, string]", "Should have lens for embedded Street field")
assert.Contains(t, contentStr, "City __lens.Lens[Person, string]", "Should have lens for embedded City field")
assert.Contains(t, contentStr, "Name __lens.Lens[Person, string]", "Should have lens for Name field")
assert.Contains(t, contentStr, "Age __lens.Lens[Person, int]", "Should have lens for Age field")
// Check that optional lenses are also generated for embedded fields
assert.Contains(t, contentStr, "StreetO __lens_option.LensO[Person, string]")
assert.Contains(t, contentStr, "CityO __lens_option.LensO[Person, string]")
}
func TestParseFileWithPointerEmbeddedStruct(t *testing.T) {
// Create a temporary test file
tmpDir := t.TempDir()
testFile := filepath.Join(tmpDir, "test.go")
testCode := `package testpkg
// Base struct to be embedded
type Metadata struct {
CreatedAt string
UpdatedAt string
}
// fp-go:Lens
type Document struct {
*Metadata
Title string
Content string
}
`
err := os.WriteFile(testFile, []byte(testCode), 0o644)
require.NoError(t, err)
// Parse the file
structs, pkg, err := parseFile(testFile)
require.NoError(t, err)
// Verify results
assert.Equal(t, "testpkg", pkg)
assert.Len(t, structs, 1)
// Check Document struct
doc := structs[0]
assert.Equal(t, "Document", doc.Name)
assert.Len(t, doc.Fields, 4, "Should have 4 fields: CreatedAt, UpdatedAt (from *Metadata), Title, and Content")
// Check that embedded fields are promoted
fieldNames := make(map[string]bool)
for _, field := range doc.Fields {
fieldNames[field.Name] = true
}
assert.True(t, fieldNames["CreatedAt"], "Should have promoted CreatedAt field from *Metadata")
assert.True(t, fieldNames["UpdatedAt"], "Should have promoted UpdatedAt field from *Metadata")
assert.True(t, fieldNames["Title"], "Should have Title field")
assert.True(t, fieldNames["Content"], "Should have Content field")
}
func TestParseFileWithGenericStruct(t *testing.T) {
// Create a temporary test file
tmpDir := t.TempDir()
testFile := filepath.Join(tmpDir, "test.go")
testCode := `package testpkg
// fp-go:Lens
type Container[T any] struct {
Value T
Count int
}
`
err := os.WriteFile(testFile, []byte(testCode), 0o644)
require.NoError(t, err)
// Parse the file
structs, pkg, err := parseFile(testFile)
require.NoError(t, err)
// Verify results
assert.Equal(t, "testpkg", pkg)
assert.Len(t, structs, 1)
// Check Container struct
container := structs[0]
assert.Equal(t, "Container", container.Name)
assert.Equal(t, "[T any]", container.TypeParams, "Should have type parameter [T any]")
assert.Len(t, container.Fields, 2)
assert.Equal(t, "Value", container.Fields[0].Name)
assert.Equal(t, "T", container.Fields[0].TypeName)
assert.Equal(t, "Count", container.Fields[1].Name)
assert.Equal(t, "int", container.Fields[1].TypeName)
}
func TestParseFileWithMultipleTypeParams(t *testing.T) {
// Create a temporary test file
tmpDir := t.TempDir()
testFile := filepath.Join(tmpDir, "test.go")
testCode := `package testpkg
// fp-go:Lens
type Pair[K comparable, V any] struct {
Key K
Value V
}
`
err := os.WriteFile(testFile, []byte(testCode), 0o644)
require.NoError(t, err)
// Parse the file
structs, pkg, err := parseFile(testFile)
require.NoError(t, err)
// Verify results
assert.Equal(t, "testpkg", pkg)
assert.Len(t, structs, 1)
// Check Pair struct
pair := structs[0]
assert.Equal(t, "Pair", pair.Name)
assert.Equal(t, "[K comparable, V any]", pair.TypeParams, "Should have type parameters [K comparable, V any]")
assert.Len(t, pair.Fields, 2)
assert.Equal(t, "Key", pair.Fields[0].Name)
assert.Equal(t, "K", pair.Fields[0].TypeName)
assert.Equal(t, "Value", pair.Fields[1].Name)
assert.Equal(t, "V", pair.Fields[1].TypeName)
}
func TestGenerateLensHelpersWithGenericStruct(t *testing.T) {
// Create a temporary directory with test files
tmpDir := t.TempDir()
testCode := `package testpkg
// fp-go:Lens
type Box[T any] struct {
Content T
Label string
}
`
testFile := filepath.Join(tmpDir, "test.go")
err := os.WriteFile(testFile, []byte(testCode), 0o644)
require.NoError(t, err)
// Generate lens code
outputFile := "gen.go"
err = generateLensHelpers(tmpDir, outputFile, false, false)
require.NoError(t, err)
// Verify the generated file exists
genPath := filepath.Join(tmpDir, outputFile)
_, err = os.Stat(genPath)
require.NoError(t, err)
// Read and verify the generated content
content, err := os.ReadFile(genPath)
require.NoError(t, err)
contentStr := string(content)
// Check for expected content with type parameters
assert.Contains(t, contentStr, "package testpkg")
assert.Contains(t, contentStr, "type BoxLenses[T any] struct", "Should have generic BoxLenses type")
assert.Contains(t, contentStr, "type BoxRefLenses[T any] struct", "Should have generic BoxRefLenses type")
assert.Contains(t, contentStr, "func MakeBoxLenses[T any]() BoxLenses[T]", "Should have generic constructor")
assert.Contains(t, contentStr, "func MakeBoxRefLenses[T any]() BoxRefLenses[T]", "Should have generic ref constructor")
// Check that fields use the generic type parameter
assert.Contains(t, contentStr, "Content __lens.Lens[Box[T], T]", "Should have lens for generic Content field")
assert.Contains(t, contentStr, "Label __lens.Lens[Box[T], string]", "Should have lens for Label field")
// Check optional lenses - only for comparable types
// T any is not comparable, so ContentO should NOT be generated
assert.NotContains(t, contentStr, "ContentO __lens_option.LensO[Box[T], T]", "T any is not comparable, should not have optional lens")
// string is comparable, so LabelO should be generated
assert.Contains(t, contentStr, "LabelO __lens_option.LensO[Box[T], string]", "string is comparable, should have optional lens")
}
func TestGenerateLensHelpersWithComparableTypeParam(t *testing.T) {
// Create a temporary directory with test files
tmpDir := t.TempDir()
testCode := `package testpkg
// fp-go:Lens
type ComparableBox[T comparable] struct {
Key T
Value string
}
`
testFile := filepath.Join(tmpDir, "test.go")
err := os.WriteFile(testFile, []byte(testCode), 0o644)
require.NoError(t, err)
// Generate lens code
outputFile := "gen.go"
err = generateLensHelpers(tmpDir, outputFile, false, false)
require.NoError(t, err)
// Verify the generated file exists
genPath := filepath.Join(tmpDir, outputFile)
_, err = os.Stat(genPath)
require.NoError(t, err)
// Read and verify the generated content
content, err := os.ReadFile(genPath)
require.NoError(t, err)
contentStr := string(content)
// Check for expected content with type parameters
assert.Contains(t, contentStr, "package testpkg")
assert.Contains(t, contentStr, "type ComparableBoxLenses[T comparable] struct", "Should have generic ComparableBoxLenses type")
assert.Contains(t, contentStr, "type ComparableBoxRefLenses[T comparable] struct", "Should have generic ComparableBoxRefLenses type")
// Check that Key field (with comparable constraint) uses MakeLensStrict in RefLenses
assert.Contains(t, contentStr, "lensKey := __lens.MakeLensStrictWithName(", "Key field with comparable constraint should use MakeLensStrictWithName")
// Check that Value field (string, always comparable) also uses MakeLensStrict
assert.Contains(t, contentStr, "lensValue := __lens.MakeLensStrictWithName(", "Value field (string) should use MakeLensStrictWithName")
// Verify that MakeLensRef is NOT used (since both fields are comparable)
assert.NotContains(t, contentStr, "__lens.MakeLensRefWithName(", "Should not use MakeLensRefWithName when all fields are comparable")
}

View File

@@ -19,6 +19,8 @@ import (
"fmt"
"os"
"strings"
S "github.com/IBM/fp-go/v2/string"
)
// Deprecated:
@@ -176,7 +178,7 @@ func generateTraverseTuple1(
}
fmt.Fprintf(f, "F%d ~func(A%d) %s", j+1, j+1, hkt(fmt.Sprintf("T%d", j+1)))
}
if infix != "" {
if S.IsNonEmpty(infix) {
fmt.Fprintf(f, ", %s", infix)
}
// types
@@ -209,7 +211,7 @@ func generateTraverseTuple1(
fmt.Fprintf(f, " return A.TraverseTuple%d(\n", i)
// map
fmt.Fprintf(f, " Map[")
if infix != "" {
if S.IsNonEmpty(infix) {
fmt.Fprintf(f, "%s, T1,", infix)
} else {
fmt.Fprintf(f, "T1,")
@@ -231,7 +233,7 @@ func generateTraverseTuple1(
fmt.Fprintf(f, " ")
}
fmt.Fprintf(f, "%s", tuple)
if infix != "" {
if S.IsNonEmpty(infix) {
fmt.Fprintf(f, ", %s", infix)
}
fmt.Fprintf(f, ", T%d],\n", j+1)
@@ -256,11 +258,11 @@ func generateSequenceTuple1(
fmt.Fprintf(f, "\n// SequenceTuple%d converts a [Tuple%d] of [%s] into an [%s].\n", i, i, hkt("T"), hkt(fmt.Sprintf("Tuple%d", i)))
fmt.Fprintf(f, "func SequenceTuple%d[", i)
if infix != "" {
if S.IsNonEmpty(infix) {
fmt.Fprintf(f, "%s", infix)
}
for j := 0; j < i; j++ {
if infix != "" || j > 0 {
if S.IsNonEmpty(infix) || j > 0 {
fmt.Fprintf(f, ", ")
}
fmt.Fprintf(f, "T%d", j+1)
@@ -276,7 +278,7 @@ func generateSequenceTuple1(
fmt.Fprintf(f, " return A.SequenceTuple%d(\n", i)
// map
fmt.Fprintf(f, " Map[")
if infix != "" {
if S.IsNonEmpty(infix) {
fmt.Fprintf(f, "%s, T1,", infix)
} else {
fmt.Fprintf(f, "T1,")
@@ -298,7 +300,7 @@ func generateSequenceTuple1(
fmt.Fprintf(f, " ")
}
fmt.Fprintf(f, "%s", tuple)
if infix != "" {
if S.IsNonEmpty(infix) {
fmt.Fprintf(f, ", %s", infix)
}
fmt.Fprintf(f, ", T%d],\n", j+1)
@@ -319,11 +321,11 @@ func generateSequenceT1(
fmt.Fprintf(f, "\n// SequenceT%d converts %d parameters of [%s] into a [%s].\n", i, i, hkt("T"), hkt(fmt.Sprintf("Tuple%d", i)))
fmt.Fprintf(f, "func SequenceT%d[", i)
if infix != "" {
if S.IsNonEmpty(infix) {
fmt.Fprintf(f, "%s", infix)
}
for j := 0; j < i; j++ {
if infix != "" || j > 0 {
if S.IsNonEmpty(infix) || j > 0 {
fmt.Fprintf(f, ", ")
}
fmt.Fprintf(f, "T%d", j+1)
@@ -339,7 +341,7 @@ func generateSequenceT1(
fmt.Fprintf(f, " return A.SequenceT%d(\n", i)
// map
fmt.Fprintf(f, " Map[")
if infix != "" {
if S.IsNonEmpty(infix) {
fmt.Fprintf(f, "%s, T1,", infix)
} else {
fmt.Fprintf(f, "T1,")
@@ -361,7 +363,7 @@ func generateSequenceT1(
fmt.Fprintf(f, " ")
}
fmt.Fprintf(f, "%s", tuple)
if infix != "" {
if S.IsNonEmpty(infix) {
fmt.Fprintf(f, ", %s", infix)
}
fmt.Fprintf(f, ", T%d],\n", j+1)

11
v2/constant/monoid.go Normal file
View File

@@ -0,0 +1,11 @@
package constant
import (
"github.com/IBM/fp-go/v2/function"
M "github.com/IBM/fp-go/v2/monoid"
)
// Monoid returns a [M.Monoid] whose operations all yield the given constant value a
func Monoid[A any](a A) M.Monoid[A] {
return M.MakeMonoid(function.Constant2[A, A](a), a)
}
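A minimal usage sketch, assuming the Concat/Empty accessors provided by M.MakeMonoid (the ExampleConstantMonoid wrapper is hypothetical):
import (
	"fmt"
	"github.com/IBM/fp-go/v2/constant"
)
func ExampleConstantMonoid() {
	m := constant.Monoid("n/a") // every operation yields the constant
	fmt.Println(m.Concat("hello", "world")) // n/a
	fmt.Println(m.Empty())                  // n/a
}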

177
v2/consumer/consumer.go Normal file
View File

@@ -0,0 +1,177 @@
// Copyright (c) 2023 - 2025 IBM Corp.
// All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package consumer
// Local transforms a Consumer by preprocessing its input through a function.
// This is the contravariant map operation for Consumers, analogous to reader.Local
// but operating on the input side rather than the output side.
//
// Given a Consumer[R1] that consumes values of type R1, and a function f that
// converts R2 to R1, Local creates a new Consumer[R2] that:
// 1. Takes a value of type R2
// 2. Applies f to convert it to R1
// 3. Passes the result to the original Consumer[R1]
//
// This is particularly useful for adapting consumers to work with different input types,
// similar to how reader.Local adapts readers to work with different environment types.
//
// Comparison with reader.Local:
// - reader.Local: Transforms the environment BEFORE passing it to a Reader (preprocessing input)
// - consumer.Local: Transforms the value BEFORE passing it to a Consumer (preprocessing input)
// - Both are contravariant operations on the input type
// - Reader produces output, Consumer performs side effects
//
// Type Parameters:
// - R2: The input type of the new Consumer (what you have)
// - R1: The input type of the original Consumer (what it expects)
//
// Parameters:
// - f: A function that converts R2 to R1 (preprocessing function)
//
// Returns:
// - An Operator that transforms Consumer[R1] into Consumer[R2]
//
// Example - Basic type adaptation:
//
// // Consumer that logs integers
// logInt := func(x int) {
// fmt.Printf("Value: %d\n", x)
// }
//
// // Adapt it to consume strings by parsing them first
// parseToInt := func(s string) int {
// n, _ := strconv.Atoi(s)
// return n
// }
//
// logString := consumer.Local(parseToInt)(logInt)
// logString("42") // Logs: "Value: 42"
//
// Example - Extracting fields from structs:
//
// type User struct {
// Name string
// Age int
// }
//
// // Consumer that logs names
// logName := func(name string) {
// fmt.Printf("Name: %s\n", name)
// }
//
// // Adapt it to consume User structs
// extractName := func(u User) string {
// return u.Name
// }
//
// logUser := consumer.Local(extractName)(logName)
// logUser(User{Name: "Alice", Age: 30}) // Logs: "Name: Alice"
//
// Example - Simplifying complex types:
//
// type DetailedConfig struct {
// Host string
// Port int
// Timeout time.Duration
// MaxRetry int
// }
//
// type SimpleConfig struct {
// Host string
// Port int
// }
//
// // Consumer that logs simple configs
// logSimple := func(c SimpleConfig) {
// fmt.Printf("Server: %s:%d\n", c.Host, c.Port)
// }
//
// // Adapt it to consume detailed configs
// simplify := func(d DetailedConfig) SimpleConfig {
// return SimpleConfig{Host: d.Host, Port: d.Port}
// }
//
// logDetailed := consumer.Local(simplify)(logSimple)
// logDetailed(DetailedConfig{
// Host: "localhost",
// Port: 8080,
// Timeout: time.Second,
// MaxRetry: 3,
// }) // Logs: "Server: localhost:8080"
//
// Example - Composing multiple transformations:
//
// type Response struct {
// StatusCode int
// Body string
// }
//
// // Consumer that logs status codes
// logStatus := func(code int) {
// fmt.Printf("Status: %d\n", code)
// }
//
// // Extract status code from response
// getStatus := func(r Response) int {
// return r.StatusCode
// }
//
// // Adapt to consume responses
// logResponse := consumer.Local(getStatus)(logStatus)
// logResponse(Response{StatusCode: 200, Body: "OK"}) // Logs: "Status: 200"
//
// Example - Using with multiple consumers:
//
// type Event struct {
// Type string
// Timestamp time.Time
// Data map[string]any
// }
//
// // Consumers for different aspects
// logType := func(t string) { fmt.Printf("Type: %s\n", t) }
// logTime := func(t time.Time) { fmt.Printf("Time: %v\n", t) }
//
// // Adapt them to consume events
// logEventType := consumer.Local(func(e Event) string { return e.Type })(logType)
// logEventTime := consumer.Local(func(e Event) time.Time { return e.Timestamp })(logTime)
//
// event := Event{Type: "UserLogin", Timestamp: time.Now(), Data: nil}
// logEventType(event) // Logs: "Type: UserLogin"
// logEventTime(event) // Logs: "Time: ..."
//
// Use Cases:
// - Type adaptation: Convert between different input types
// - Field extraction: Extract specific fields from complex structures
// - Data transformation: Preprocess data before consumption
// - Interface adaptation: Adapt consumers to work with different interfaces
// - Logging pipelines: Transform data before logging
// - Event handling: Extract relevant data from events before processing
//
// Relationship to Reader:
// Consumer is the dual of Reader in category theory:
// - Reader[R, A] = R -> A (produces output from environment)
// - Consumer[A] = A -> () (consumes input, produces side effects)
// - reader.Local transforms the environment before reading
// - consumer.Local transforms the input before consuming
// - Both are contravariant functors on their input type
func Local[R2, R1 any](f func(R2) R1) Operator[R1, R2] {
return func(c Consumer[R1]) Consumer[R2] {
return func(r2 R2) {
c(f(r2))
}
}
}

View File

@@ -0,0 +1,383 @@
// Copyright (c) 2023 - 2025 IBM Corp.
// All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package consumer
import (
"strconv"
"testing"
"time"
"github.com/IBM/fp-go/v2/function"
"github.com/stretchr/testify/assert"
)
func TestLocal(t *testing.T) {
t.Run("basic type transformation", func(t *testing.T) {
var captured int
consumeInt := func(x int) {
captured = x
}
// Transform string to int before consuming
stringToInt := func(s string) int {
n, _ := strconv.Atoi(s)
return n
}
consumeString := Local(stringToInt)(consumeInt)
consumeString("42")
assert.Equal(t, 42, captured)
})
t.Run("field extraction from struct", func(t *testing.T) {
type User struct {
Name string
Age int
}
var capturedName string
consumeName := func(name string) {
capturedName = name
}
extractName := func(u User) string {
return u.Name
}
consumeUser := Local(extractName)(consumeName)
consumeUser(User{Name: "Alice", Age: 30})
assert.Equal(t, "Alice", capturedName)
})
t.Run("simplifying complex types", func(t *testing.T) {
type DetailedConfig struct {
Host string
Port int
Timeout time.Duration
MaxRetry int
}
type SimpleConfig struct {
Host string
Port int
}
var captured SimpleConfig
consumeSimple := func(c SimpleConfig) {
captured = c
}
simplify := func(d DetailedConfig) SimpleConfig {
return SimpleConfig{Host: d.Host, Port: d.Port}
}
consumeDetailed := Local(simplify)(consumeSimple)
consumeDetailed(DetailedConfig{
Host: "localhost",
Port: 8080,
Timeout: time.Second,
MaxRetry: 3,
})
assert.Equal(t, SimpleConfig{Host: "localhost", Port: 8080}, captured)
})
t.Run("multiple transformations", func(t *testing.T) {
type Response struct {
StatusCode int
Body string
}
var capturedStatus int
consumeStatus := func(code int) {
capturedStatus = code
}
getStatus := func(r Response) int {
return r.StatusCode
}
consumeResponse := Local(getStatus)(consumeStatus)
consumeResponse(Response{StatusCode: 200, Body: "OK"})
assert.Equal(t, 200, capturedStatus)
})
t.Run("chaining Local transformations", func(t *testing.T) {
type Level3 struct{ Value int }
type Level2 struct{ L3 Level3 }
type Level1 struct{ L2 Level2 }
var captured int
consumeInt := func(x int) {
captured = x
}
// Chain multiple Local transformations
extract3 := func(l3 Level3) int { return l3.Value }
extract2 := func(l2 Level2) Level3 { return l2.L3 }
extract1 := func(l1 Level1) Level2 { return l1.L2 }
// Compose the transformations
consumeLevel3 := Local(extract3)(consumeInt)
consumeLevel2 := Local(extract2)(consumeLevel3)
consumeLevel1 := Local(extract1)(consumeLevel2)
consumeLevel1(Level1{L2: Level2{L3: Level3{Value: 42}}})
assert.Equal(t, 42, captured)
})
t.Run("identity transformation", func(t *testing.T) {
var captured string
consumeString := func(s string) {
captured = s
}
identity := function.Identity[string]
consumeIdentity := Local(identity)(consumeString)
consumeIdentity("test")
assert.Equal(t, "test", captured)
})
t.Run("transformation with calculation", func(t *testing.T) {
type Rectangle struct {
Width int
Height int
}
var capturedArea int
consumeArea := func(area int) {
capturedArea = area
}
calculateArea := func(r Rectangle) int {
return r.Width * r.Height
}
consumeRectangle := Local(calculateArea)(consumeArea)
consumeRectangle(Rectangle{Width: 5, Height: 10})
assert.Equal(t, 50, capturedArea)
})
t.Run("multiple consumers with same transformation", func(t *testing.T) {
type Event struct {
Type string
Timestamp time.Time
}
var capturedType string
var capturedTime time.Time
consumeType := func(t string) {
capturedType = t
}
consumeTime := func(t time.Time) {
capturedTime = t
}
extractType := func(e Event) string { return e.Type }
extractTime := func(e Event) time.Time { return e.Timestamp }
consumeEventType := Local(extractType)(consumeType)
consumeEventTime := Local(extractTime)(consumeTime)
now := time.Now()
event := Event{Type: "UserLogin", Timestamp: now}
consumeEventType(event)
consumeEventTime(event)
assert.Equal(t, "UserLogin", capturedType)
assert.Equal(t, now, capturedTime)
})
t.Run("transformation with slice", func(t *testing.T) {
var captured int
consumeLength := func(n int) {
captured = n
}
getLength := func(s []string) int {
return len(s)
}
consumeSlice := Local(getLength)(consumeLength)
consumeSlice([]string{"a", "b", "c"})
assert.Equal(t, 3, captured)
})
t.Run("transformation with map", func(t *testing.T) {
var captured int
consumeCount := func(n int) {
captured = n
}
getCount := func(m map[string]int) int {
return len(m)
}
consumeMap := Local(getCount)(consumeCount)
consumeMap(map[string]int{"a": 1, "b": 2, "c": 3})
assert.Equal(t, 3, captured)
})
t.Run("transformation with pointer", func(t *testing.T) {
var captured int
consumeInt := func(x int) {
captured = x
}
dereference := func(p *int) int {
if p == nil {
return 0
}
return *p
}
consumePointer := Local(dereference)(consumeInt)
value := 42
consumePointer(&value)
assert.Equal(t, 42, captured)
consumePointer(nil)
assert.Equal(t, 0, captured)
})
t.Run("transformation with custom type", func(t *testing.T) {
type MyType struct {
Value string
}
var captured string
consumeString := func(s string) {
captured = s
}
extractValue := func(m MyType) string {
return m.Value
}
consumeMyType := Local(extractValue)(consumeString)
consumeMyType(MyType{Value: "test"})
assert.Equal(t, "test", captured)
})
t.Run("accumulation through multiple calls", func(t *testing.T) {
var sum int
accumulate := func(x int) {
sum += x
}
double := func(x int) int {
return x * 2
}
accumulateDoubled := Local(double)(accumulate)
accumulateDoubled(1)
accumulateDoubled(2)
accumulateDoubled(3)
assert.Equal(t, 12, sum) // (1*2) + (2*2) + (3*2) = 2 + 4 + 6 = 12
})
t.Run("transformation with error handling", func(t *testing.T) {
type Result struct {
Value int
Error error
}
var captured int
consumeInt := func(x int) {
captured = x
}
extractValue := func(r Result) int {
if r.Error != nil {
return -1
}
return r.Value
}
consumeResult := Local(extractValue)(consumeInt)
consumeResult(Result{Value: 42, Error: nil})
assert.Equal(t, 42, captured)
consumeResult(Result{Value: 100, Error: assert.AnError})
assert.Equal(t, -1, captured)
})
t.Run("transformation preserves consumer behavior", func(t *testing.T) {
callCount := 0
consumer := func(x int) {
callCount++
}
transform := func(s string) int {
n, _ := strconv.Atoi(s)
return n
}
transformedConsumer := Local(transform)(consumer)
transformedConsumer("1")
transformedConsumer("2")
transformedConsumer("3")
assert.Equal(t, 3, callCount)
})
t.Run("comparison with reader.Local behavior", func(t *testing.T) {
// This test demonstrates the dual nature of Consumer and Reader
// Consumer: transforms input before consumption (contravariant)
// Reader: transforms environment before reading (also contravariant on input)
type DetailedEnv struct {
Value int
Extra string
}
type SimpleEnv struct {
Value int
}
var captured int
consumeSimple := func(e SimpleEnv) {
captured = e.Value
}
simplify := func(d DetailedEnv) SimpleEnv {
return SimpleEnv{Value: d.Value}
}
consumeDetailed := Local(simplify)(consumeSimple)
consumeDetailed(DetailedEnv{Value: 42, Extra: "ignored"})
assert.Equal(t, 42, captured)
})
}

58
v2/consumer/types.go Normal file
View File

@@ -0,0 +1,58 @@
// Copyright (c) 2023 - 2025 IBM Corp.
// All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package consumer provides types and utilities for functions that consume values without returning results.
//
// A Consumer represents a side-effecting operation that accepts a value but produces no output.
// This is useful for operations like logging, printing, updating state, or any action where
// the return value is not needed.
package consumer
type (
// Consumer represents a function that accepts a value of type A and performs a side effect.
// It does not return any value, making it useful for operations where only the side effect matters,
// such as logging, printing, or updating external state.
//
// This is a fundamental concept in functional programming for handling side effects in a
// controlled manner. Consumers can be composed, chained, or used in higher-order functions
// to build complex side-effecting behaviors.
//
// Type Parameters:
// - A: The type of value consumed by the function
//
// Example:
//
// // A simple consumer that prints values
// var printInt Consumer[int] = func(x int) {
// fmt.Println(x)
// }
// printInt(42) // Prints: 42
//
// // A consumer that logs messages
// var logger Consumer[string] = func(msg string) {
// log.Println(msg)
// }
// logger("Hello, World!") // Logs: Hello, World!
//
// // Consumers can be used in functional pipelines
// var saveToDatabase Consumer[User] = func(user User) {
// db.Save(user)
// }
Consumer[A any] = func(A)
// Operator represents a function that transforms a Consumer[A] into a Consumer[B].
// This is useful for composing and adapting consumers to work with different types.
Operator[A, B any] = func(Consumer[A]) Consumer[B]
)
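A minimal sketch of an Operator in use, assuming only the aliases above plus fmt and strconv; adaptToString is hypothetical and mirrors the shape that consumer.Local produces:
// adaptToString is an Operator[int, string]: it turns a Consumer[int]
// into a Consumer[string] by parsing the input first.
var adaptToString Operator[int, string] = func(c Consumer[int]) Consumer[string] {
	return func(s string) {
		n, _ := strconv.Atoi(s) // parse errors ignored for brevity
		c(n)
	}
}
printInt := func(x int) { fmt.Println(x) }
adaptToString(printInt)("42") // prints 42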

View File

@@ -21,11 +21,38 @@ import (
"github.com/IBM/fp-go/v2/result"
)
// withContext wraps an existing IOEither and performs a context check for cancellation before delegating
// WithContext wraps an IOResult and performs a context check for cancellation before executing.
// This ensures that if the context is already cancelled, the computation short-circuits immediately
// without executing the wrapped computation.
//
// This is useful for adding cancellation awareness to computations that might not check the context themselves.
//
// Type Parameters:
// - A: The type of the success value
//
// Parameters:
// - ctx: The context to check for cancellation
// - ma: The IOResult to wrap with context checking
//
// Returns:
// - An IOResult that checks for cancellation before executing
//
// Example:
//
// computation := func() Result[string] {
// // Long-running operation
// return result.Of("done")
// }
//
// ctx, cancel := context.WithCancel(context.Background())
// cancel() // Cancel immediately
//
// wrapped := WithContext(ctx, computation)
// result := wrapped() // Returns Left with context.Canceled error
func WithContext[A any](ctx context.Context, ma IOResult[A]) IOResult[A] {
return func() Result[A] {
if err := context.Cause(ctx); err != nil {
return result.Left[A](err)
if ctx.Err() != nil {
return result.Left[A](context.Cause(ctx))
}
return ma()
}
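A sketch of the cancellation path, assuming Go 1.20+'s context.WithCancelCause and the result constructors already used in the example above:
ctx, cancel := context.WithCancelCause(context.Background())
cancel(errors.New("shutting down")) // cancel with an explicit cause
slow := func() Result[int] { return result.Of(42) } // never executed here
r := WithContext[int](ctx, slow)()
// r is a Left holding "shutting down": ctx.Err() is non-nil, and
// context.Cause(ctx) reports the cause instead of the generic context.Canceled.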

View File

@@ -6,6 +6,11 @@ import (
)
type (
// IOResult represents a synchronous computation that may fail with an error.
// It's an alias for ioresult.IOResult[T].
IOResult[T any] = ioresult.IOResult[T]
Result[T any] = result.Result[T]
// Result represents a computation that may fail with an error.
// It's an alias for result.Result[T].
Result[T any] = result.Result[T]
)

View File

@@ -0,0 +1,75 @@
package readerio
import (
RIO "github.com/IBM/fp-go/v2/readerio"
)
// Bracket ensures that a resource is properly acquired, used, and released, even if an error occurs.
// This implements the bracket pattern for safe resource management with [ReaderIO].
//
// The bracket pattern guarantees that:
// - The acquire action is executed first to obtain the resource
// - The use function is called with the acquired resource
// - The release function is always called with the resource and result, regardless of success or failure
// - The final result from the use function is returned
//
// This is particularly useful for managing resources like file handles, database connections,
// or locks that must be cleaned up properly.
//
// Type Parameters:
// - A: The type of the acquired resource
// - B: The type of the result produced by the use function
// - ANY: The type returned by the release function (typically ignored)
//
// Parameters:
// - acquire: A ReaderIO that acquires the resource
// - use: A Kleisli arrow that uses the resource and produces a result
// - release: A function that releases the resource, receiving both the resource and the result
//
// Returns:
// - A ReaderIO[B] that safely manages the resource lifecycle
//
// Example:
//
// // Acquire a file handle
// acquireFile := func(ctx context.Context) IO[*os.File] {
// return func() *os.File {
// f, _ := os.Open("data.txt")
// return f
// }
// }
//
// // Use the file
// readFile := func(f *os.File) ReaderIO[string] {
// return func(ctx context.Context) IO[string] {
// return func() string {
// data, _ := io.ReadAll(f)
// return string(data)
// }
// }
// }
//
// // Release the file
// closeFile := func(f *os.File, result string) ReaderIO[any] {
// return func(ctx context.Context) IO[any] {
// return func() any {
// f.Close()
// return nil
// }
// }
// }
//
// // Safely read file with automatic cleanup
// safeRead := Bracket(acquireFile, readFile, closeFile)
// result := safeRead(context.Background())()
//
//go:inline
func Bracket[
A, B, ANY any](
acquire ReaderIO[A],
use Kleisli[A, B],
release func(A, B) ReaderIO[ANY],
) ReaderIO[B] {
return RIO.Bracket(acquire, use, release)
}

View File

@@ -0,0 +1,65 @@
package readerio
import "github.com/IBM/fp-go/v2/io"
// ChainConsumer chains a consumer function into a ReaderIO computation, discarding the original value.
// This is useful for performing side effects (like logging or metrics) that consume a value
// but don't produce a meaningful result.
//
// The consumer is executed for its side effects, and the computation returns an empty struct.
//
// Type Parameters:
// - A: The type of value to consume
//
// Parameters:
// - c: A consumer function that performs side effects on the value
//
// Returns:
// - An Operator that chains the consumer and returns struct{}
//
// Example:
//
// logUser := func(u User) {
// log.Printf("Processing user: %s", u.Name)
// }
//
// pipeline := F.Pipe2(
// fetchUser(123),
// ChainConsumer(logUser),
// )
//
//go:inline
func ChainConsumer[A any](c Consumer[A]) Operator[A, Void] {
return ChainIOK(io.FromConsumer(c))
}
// ChainFirstConsumer chains a consumer function into a ReaderIO computation, preserving the original value.
// This is useful for performing side effects (like logging or metrics) while passing the value through unchanged.
//
// The consumer is executed for its side effects, but the original value is returned.
//
// Type Parameters:
// - A: The type of value to consume and return
//
// Parameters:
// - c: A consumer function that performs side effects on the value
//
// Returns:
// - An Operator that chains the consumer and returns the original value
//
// Example:
//
// logUser := func(u User) {
// log.Printf("User: %s", u.Name)
// }
//
// pipeline := F.Pipe3(
// fetchUser(123),
// ChainFirstConsumer(logUser), // Logs but passes user through
// Map(func(u User) string { return u.Email }),
// )
//
//go:inline
func ChainFirstConsumer[A any](c Consumer[A]) Operator[A, A] {
return ChainFirstIOK(io.FromConsumer(c))
}

117
v2/context/readerio/flip.go Normal file
View File

@@ -0,0 +1,117 @@
package readerio
import (
"context"
"github.com/IBM/fp-go/v2/reader"
RIO "github.com/IBM/fp-go/v2/readerio"
)
// SequenceReader transforms a ReaderIO containing a Reader into a Reader containing a ReaderIO.
// This "flips" the nested structure, allowing you to provide the Reader's environment first,
// then get a ReaderIO that can be executed with a context.
//
// Type transformation:
//
// From: ReaderIO[Reader[R, A]]
// = func(context.Context) func() func(R) A
//
// To: Reader[R, ReaderIO[A]]
// = func(R) func(context.Context) func() A
//
// This is useful for point-free style programming where you want to partially apply
// the Reader's environment before dealing with the context.
//
// Type Parameters:
// - R: The environment type that the Reader depends on
// - A: The value type
//
// Parameters:
// - ma: A ReaderIO containing a Reader
//
// Returns:
// - A Reader that produces a ReaderIO when given an environment
//
// Example:
//
// type Config struct {
// Timeout int
// }
//
// // A computation that produces a Reader
// getMultiplier := func(ctx context.Context) IO[func(Config) int] {
// return func() func(Config) int {
// return func(cfg Config) int {
// return cfg.Timeout * 2
// }
// }
// }
//
// // Sequence it to apply Config first
// sequenced := SequenceReader[Config, int](getMultiplier)
// cfg := Config{Timeout: 30}
// result := sequenced(cfg)(context.Background())() // Returns 60
//
//go:inline
func SequenceReader[R, A any](ma ReaderIO[Reader[R, A]]) Reader[R, ReaderIO[A]] {
return RIO.SequenceReader(ma)
}
// TraverseReader applies a Reader-based transformation to a ReaderIO, introducing a new environment dependency.
//
// This function takes a Reader-based Kleisli arrow and returns a function that can transform
// a ReaderIO. The result allows you to provide the Reader's environment (R) first, which then
// produces a ReaderIO that depends on the context.
//
// Type transformation:
//
// From: ReaderIO[A]
// = func(context.Context) func() A
//
// With: reader.Kleisli[R, A, B]
// = func(A) func(R) B
//
// To: func(ReaderIO[A]) func(R) ReaderIO[B]
// = func(ReaderIO[A]) func(R) func(context.Context) func() B
//
// This enables transforming values within a ReaderIO using environment-dependent logic.
//
// Type Parameters:
// - R: The environment type that the Reader depends on
// - A: The input value type
// - B: The output value type
//
// Parameters:
// - f: A Reader-based Kleisli arrow that transforms A to B using environment R
//
// Returns:
// - A function that takes a ReaderIO[A] and returns a function from R to ReaderIO[B]
//
// Example:
//
// type Config struct {
// Multiplier int
// }
//
// // A Reader-based transformation
// multiply := func(x int) func(Config) int {
// return func(cfg Config) int {
// return x * cfg.Multiplier
// }
// }
//
// // Apply TraverseReader
// traversed := TraverseReader[Config, int, int](multiply)
// computation := Of(10)
// result := traversed(computation)
//
// // Provide Config to get final result
// cfg := Config{Multiplier: 5}
// finalResult := result(cfg)(context.Background())() // Returns 50
//
//go:inline
func TraverseReader[R, A, B any](
f reader.Kleisli[R, A, B],
) func(ReaderIO[A]) Kleisli[R, B] {
return RIO.TraverseReader[context.Context](f)
}

View File

@@ -0,0 +1,91 @@
package readerio
import (
"context"
"log/slog"
"github.com/IBM/fp-go/v2/logging"
)
// SLogWithCallback creates a Kleisli arrow that logs a value with a custom logger and log level.
// The value is logged and then passed through unchanged, making this useful for debugging
// and monitoring values as they flow through a ReaderIO computation.
//
// Type Parameters:
// - A: The type of value to log and pass through
//
// Parameters:
// - logLevel: The slog.Level to use for logging (e.g., slog.LevelInfo, slog.LevelDebug)
// - cb: Callback function to retrieve the *slog.Logger from the context
// - message: A descriptive message to include in the log entry
//
// Returns:
// - A Kleisli arrow that logs the value and returns it unchanged
//
// Example:
//
// getMyLogger := func(ctx context.Context) *slog.Logger {
// if logger := ctx.Value("logger"); logger != nil {
// return logger.(*slog.Logger)
// }
// return slog.Default()
// }
//
// debugLog := SLogWithCallback[User](
// slog.LevelDebug,
// getMyLogger,
// "Processing user",
// )
//
// pipeline := F.Pipe2(
// fetchUser(123),
// Chain(debugLog),
// )
func SLogWithCallback[A any](
logLevel slog.Level,
cb func(context.Context) *slog.Logger,
message string) Kleisli[A, A] {
return func(a A) ReaderIO[A] {
return func(ctx context.Context) IO[A] {
// logger
logger := cb(ctx)
return func() A {
logger.LogAttrs(ctx, logLevel, message, slog.Any("value", a))
return a
}
}
}
}
// SLog creates a Kleisli arrow that logs a value at Info level and passes it through unchanged.
// This is a convenience wrapper around SLogWithCallback with standard settings.
//
// The value is logged with the provided message and then returned unchanged, making this
// useful for debugging and monitoring values in a ReaderIO computation pipeline.
//
// Type Parameters:
// - A: The type of value to log and pass through
//
// Parameters:
// - message: A descriptive message to include in the log entry
//
// Returns:
// - A Kleisli arrow that logs the value at Info level and returns it unchanged
//
// Example:
//
// pipeline := F.Pipe3(
// fetchUser(123),
// Chain(SLog[User]("Fetched user")),
// Map(func(u User) string { return u.Name }),
// Chain(SLog[string]("Extracted name")),
// )
//
// result := pipeline(context.Background())()
// // Logs: "Fetched user" value={ID:123 Name:"Alice"}
// // Logs: "Extracted name" value="Alice"
//
//go:inline
func SLog[A any](message string) Kleisli[A, A] {
return SLogWithCallback[A](slog.LevelInfo, logging.GetLoggerFromContext, message)
}

View File

@@ -0,0 +1,769 @@
// Copyright (c) 2023 - 2025 IBM Corp.
// All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package readerio
import (
"context"
"time"
"github.com/IBM/fp-go/v2/function"
"github.com/IBM/fp-go/v2/reader"
RIO "github.com/IBM/fp-go/v2/readerio"
)
const (
// useParallel is the feature flag that controls whether the parallel or the sequential implementation of Ap is used
useParallel = true
)
// MonadMap transforms the result of a [ReaderIO] using the provided function.
// The context is passed through to the underlying computation unchanged.
//
// Parameters:
// - fa: The ReaderIO to transform
// - f: The transformation function
//
// Returns a new ReaderIO with the transformed value.
//
//go:inline
func MonadMap[A, B any](fa ReaderIO[A], f func(A) B) ReaderIO[B] {
return RIO.MonadMap(fa, f)
}
// Map transforms the result of a [ReaderIO] using the provided function.
// This is the curried version of [MonadMap], useful for composition.
//
// Parameters:
// - f: The transformation function
//
// Returns a function that transforms a ReaderIO.
//
//go:inline
func Map[A, B any](f func(A) B) Operator[A, B] {
return RIO.Map[context.Context](f)
}
// MonadMapTo replaces the result of a [ReaderIO] with a constant value.
// The underlying computation is still executed; only its result is discarded.
//
// Parameters:
// - fa: The ReaderIO to transform
// - b: The constant value to use
//
// Returns a new ReaderIO with the constant value.
//
//go:inline
func MonadMapTo[A, B any](fa ReaderIO[A], b B) ReaderIO[B] {
return RIO.MonadMapTo(fa, b)
}
// MapTo replaces the result of a [ReaderIO] with a constant value.
// This is the curried version of [MonadMapTo].
//
// Parameters:
// - b: The constant value to use
//
// Returns a function that transforms a ReaderIO.
//
//go:inline
func MapTo[A, B any](b B) Operator[A, B] {
return RIO.MapTo[context.Context, A](b)
}
// MonadChain sequences two [ReaderIO] computations, where the second depends on the result of the first.
// Both computations receive the same context.
//
// Parameters:
// - ma: The first ReaderIO
// - f: Function that produces the second ReaderIO based on the first's result
//
// Returns a new ReaderIO representing the sequenced computation.
//
//go:inline
func MonadChain[A, B any](ma ReaderIO[A], f Kleisli[A, B]) ReaderIO[B] {
return RIO.MonadChain(ma, f)
}
// Chain sequences two [ReaderIO] computations, where the second depends on the result of the first.
// This is the curried version of [MonadChain], useful for composition.
//
// Parameters:
// - f: Function that produces the second ReaderIO based on the first's result
//
// Returns a function that sequences ReaderIO computations.
//
//go:inline
func Chain[A, B any](f Kleisli[A, B]) Operator[A, B] {
return RIO.Chain(f)
}
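A small pipeline sketch combining Of, Chain, and Map via F.Pipe2, where F is github.com/IBM/fp-go/v2/function and the lookup literal is a hypothetical stand-in:
// loadCount describes a context-aware computation; nothing runs until it is invoked.
loadCount := F.Pipe2(
	Of(123), // ReaderIO[int]: a user id
	Chain(func(id int) ReaderIO[[]string] {
		return Of([]string{"a", "b", "c"}) // stand-in for a real lookup
	}),
	Map(func(items []string) int { return len(items) }),
)
n := loadCount(context.Background())() // n == 3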
// MonadChainFirst sequences two [ReaderIO] computations but returns the result of the first.
// The second computation is executed for its side effects only.
//
// Parameters:
// - ma: The first ReaderIO
// - f: Function that produces the second ReaderIO
//
// Returns a ReaderIO with the result of the first computation.
//
//go:inline
func MonadChainFirst[A, B any](ma ReaderIO[A], f Kleisli[A, B]) ReaderIO[A] {
return RIO.MonadChainFirst(ma, f)
}
// MonadTap executes a side-effect computation but returns the original value.
// This is an alias for [MonadChainFirst] and is useful for operations like logging
// or validation that should not affect the main computation flow.
//
// Parameters:
// - ma: The ReaderIO to tap
// - f: Function that produces a side-effect ReaderIO
//
// Returns a ReaderIO with the original value after executing the side effect.
//
//go:inline
func MonadTap[A, B any](ma ReaderIO[A], f Kleisli[A, B]) ReaderIO[A] {
return RIO.MonadTap(ma, f)
}
// ChainFirst sequences two [ReaderIO] computations but returns the result of the first.
// This is the curried version of [MonadChainFirst].
//
// Parameters:
// - f: Function that produces the second ReaderIO
//
// Returns a function that sequences ReaderIO computations.
//
//go:inline
func ChainFirst[A, B any](f Kleisli[A, B]) Operator[A, A] {
return RIO.ChainFirst(f)
}
// Tap executes a side-effect computation but returns the original value.
// This is the curried version of [MonadTap], an alias for [ChainFirst].
//
// Parameters:
// - f: Function that produces a side-effect ReaderIO
//
// Returns a function that taps ReaderIO computations.
//
//go:inline
func Tap[A, B any](f Kleisli[A, B]) Operator[A, A] {
return RIO.Tap(f)
}
// Of creates a [ReaderIO] that ignores its context and returns the given value.
// This represents the monadic return (pure) operation.
//
// Parameters:
// - a: The value to wrap
//
// Returns a ReaderIO that produces the given value.
//
//go:inline
func Of[A any](a A) ReaderIO[A] {
return RIO.Of[context.Context](a)
}
// MonadApPar implements parallel applicative application for [ReaderIO].
// It executes the function and value computations in parallel where possible,
// potentially improving performance for independent operations.
//
// Parameters:
// - fab: ReaderIO containing a function
// - fa: ReaderIO containing a value
//
// Returns a ReaderIO with the function applied to the value.
//
//go:inline
func MonadApPar[B, A any](fab ReaderIO[func(A) B], fa ReaderIO[A]) ReaderIO[B] {
return RIO.MonadApPar(fab, fa)
}
// MonadAp implements applicative application for [ReaderIO].
// By default, it uses parallel execution ([MonadApPar]) but can be configured to use
// sequential execution ([MonadApSeq]) via the useParallel constant.
//
// Parameters:
// - fab: ReaderIO containing a function
// - fa: ReaderIO containing a value
//
// Returns a ReaderIO with the function applied to the value.
//
//go:inline
func MonadAp[B, A any](fab ReaderIO[func(A) B], fa ReaderIO[A]) ReaderIO[B] {
// dispatch to the configured version
if useParallel {
return MonadApPar(fab, fa)
}
return MonadApSeq(fab, fa)
}
// MonadApSeq implements sequential applicative application for [ReaderIO].
// It executes the function computation first, then the value computation.
//
// Parameters:
// - fab: ReaderIO containing a function
// - fa: ReaderIO containing a value
//
// Returns a ReaderIO with the function applied to the value.
//
//go:inline
func MonadApSeq[B, A any](fab ReaderIO[func(A) B], fa ReaderIO[A]) ReaderIO[B] {
return RIO.MonadApSeq(fab, fa)
}
// Ap applies a function wrapped in a [ReaderIO] to a value wrapped in a ReaderIO.
// This is the curried version of [MonadAp], using the default execution mode.
//
// Parameters:
// - fa: ReaderIO containing a value
//
// Returns a function that applies a ReaderIO function to the value.
//
//go:inline
func Ap[B, A any](fa ReaderIO[A]) Operator[func(A) B, B] {
return RIO.Ap[B](fa)
}
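A sketch of applicative composition with Ap in the default mode, assuming only Of, Map, and Ap above (F.Pipe2 from the function package):
add := func(a int) func(int) int {
	return func(b int) int { return a + b }
}
sum := F.Pipe2(
	Of(1),
	Map(add),       // ReaderIO[func(int) int]
	Ap[int](Of(2)), // applies the wrapped function to the second value
)
_ = sum(context.Background())() // 3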
// ApSeq applies a function wrapped in a [ReaderIO] to a value sequentially.
// This is the curried version of [MonadApSeq].
//
// Parameters:
// - fa: ReaderIO containing a value
//
// Returns a function that applies a ReaderIO function to the value sequentially.
//
//go:inline
func ApSeq[B, A any](fa ReaderIO[A]) Operator[func(A) B, B] {
return function.Bind2nd(MonadApSeq[B, A], fa)
}
// ApPar applies a function wrapped in a [ReaderIO] to a value in parallel.
// This is the curried version of [MonadApPar].
//
// Parameters:
// - fa: ReaderIO containing a value
//
// Returns a function that applies a ReaderIO function to the value in parallel.
//
//go:inline
func ApPar[B, A any](fa ReaderIO[A]) Operator[func(A) B, B] {
return function.Bind2nd(MonadApPar[B, A], fa)
}
// Ask returns a [ReaderIO] that provides access to the context.
// This is useful for accessing the [context.Context] within a computation.
//
// Returns a ReaderIO that produces the context.
//
//go:inline
func Ask() ReaderIO[context.Context] {
return RIO.Ask[context.Context]()
}
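A sketch using Ask to inspect the ambient context, assuming only Ask and Map above:
// hasDeadline reports whether the context carries a deadline.
hasDeadline := F.Pipe1(
	Ask(),
	Map(func(ctx context.Context) bool {
		_, ok := ctx.Deadline()
		return ok
	}),
)
_ = hasDeadline(context.Background())() // false: no deadline was set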
// FromIO converts an [IO] into a [ReaderIO].
// The resulting computation ignores the context and simply runs the IO action.
//
// Parameters:
// - t: The IO to convert
//
// Returns a ReaderIO that executes the IO action.
//
//go:inline
func FromIO[A any](t IO[A]) ReaderIO[A] {
return RIO.FromIO[context.Context](t)
}
// FromReader converts a [Reader] into a [ReaderIO].
// The Reader computation is lifted into the IO context, allowing it to be
// composed with other ReaderIO operations.
//
// Parameters:
// - t: The Reader to convert
//
// Returns a ReaderIO that executes the Reader and wraps the result in IO.
//
//go:inline
func FromReader[A any](t Reader[context.Context, A]) ReaderIO[A] {
return RIO.FromReader(t)
}
// FromLazy converts a [Lazy] computation into a [ReaderIO].
// The resulting computation ignores the context and simply evaluates the Lazy computation.
// This is an alias for [FromIO] since Lazy and IO have the same structure.
//
// Parameters:
// - t: The Lazy computation to convert
//
// Returns a ReaderIO that evaluates the Lazy computation.
//
//go:inline
func FromLazy[A any](t Lazy[A]) ReaderIO[A] {
return RIO.FromIO[context.Context](t)
}
// MonadChainIOK chains a function that returns an [IO] into a [ReaderIO] computation.
// The resulting IO is lifted into the ReaderIO context.
//
// Parameters:
// - ma: The ReaderIO to chain from
// - f: Function that produces an IO
//
// Returns a new ReaderIO with the chained IO computation.
//
//go:inline
func MonadChainIOK[A, B any](ma ReaderIO[A], f func(A) IO[B]) ReaderIO[B] {
return RIO.MonadChainIOK(ma, f)
}
// ChainIOK chains a function that returns an [IO] into a [ReaderIO] computation.
// This is the curried version of [MonadChainIOK].
//
// Parameters:
// - f: Function that produces an IO
//
// Returns a function that chains the IO-returning function.
//
//go:inline
func ChainIOK[A, B any](f func(A) IO[B]) Operator[A, B] {
return RIO.ChainIOK[context.Context](f)
}
// MonadChainFirstIOK chains a function that returns an [IO] but keeps the original value.
// The IO computation is executed for its side effects only.
//
// Parameters:
// - ma: The ReaderIO to chain from
// - f: Function that produces an IO
//
// Returns a ReaderIO with the original value after executing the IO.
//
//go:inline
func MonadChainFirstIOK[A, B any](ma ReaderIO[A], f func(A) IO[B]) ReaderIO[A] {
return RIO.MonadChainFirstIOK(ma, f)
}
// MonadTapIOK chains a function that returns an [IO] but keeps the original value.
// This is an alias for [MonadChainFirstIOK] and is useful for side effects like logging.
//
// Parameters:
// - ma: The ReaderIO to tap
// - f: Function that produces an IO for side effects
//
// Returns a ReaderIO with the original value after executing the IO.
//
//go:inline
func MonadTapIOK[A, B any](ma ReaderIO[A], f func(A) IO[B]) ReaderIO[A] {
return RIO.MonadTapIOK(ma, f)
}
// ChainFirstIOK chains a function that returns an [IO] but keeps the original value.
// This is the curried version of [MonadChainFirstIOK].
//
// Parameters:
// - f: Function that produces an IO
//
// Returns a function that chains the IO-returning function.
//
//go:inline
func ChainFirstIOK[A, B any](f func(A) IO[B]) Operator[A, A] {
return RIO.ChainFirstIOK[context.Context](f)
}
// TapIOK chains a function that returns an [IO] but keeps the original value.
// This is the curried version of [MonadTapIOK], an alias for [ChainFirstIOK].
//
// Parameters:
// - f: Function that produces an IO for side effects
//
// Returns a function that taps with IO-returning functions.
//
//go:inline
func TapIOK[A, B any](f func(A) IO[B]) Operator[A, A] {
return RIO.TapIOK[context.Context](f)
}
// Defer creates a [ReaderIO] by lazily generating a new computation each time it's executed.
// This is useful for creating computations that should be re-evaluated on each execution.
//
// Parameters:
// - gen: Lazy generator function that produces a ReaderIO
//
// Returns a ReaderIO that generates a fresh computation on each execution.
//
//go:inline
func Defer[A any](gen Lazy[ReaderIO[A]]) ReaderIO[A] {
return RIO.Defer(gen)
}
// Memoize computes the value of the provided [ReaderIO] monad lazily but exactly once.
// The context used to compute the value is the one supplied on the first call, so do not use this
// function if the result depends on the content of the context.
//
// Parameters:
// - rdr: The ReaderIO to memoize
//
// Returns a ReaderIO that caches its result after the first execution.
//
//go:inline
func Memoize[A any](rdr ReaderIO[A]) ReaderIO[A] {
return RIO.Memoize(rdr)
}
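A sketch of the single-evaluation guarantee, assuming FromIO and Memoize above; the counter exists only to make the caching visible:
calls := 0
expensive := FromIO(func() int {
	calls++ // runs only on the first execution
	return 42
})
once := Memoize(expensive)
_ = once(context.Background())()
_ = once(context.Background())()
// calls == 1: later executions reuse the cached result.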
// Flatten converts a nested [ReaderIO] into a flat [ReaderIO].
// This is equivalent to [MonadChain] with the identity function.
//
// Parameters:
// - rdr: The nested ReaderIO to flatten
//
// Returns a flattened ReaderIO.
//
//go:inline
func Flatten[A any](rdr ReaderIO[ReaderIO[A]]) ReaderIO[A] {
return RIO.Flatten(rdr)
}
// MonadFlap applies a value to a function wrapped in a [ReaderIO].
// This is the reverse of [MonadAp], useful in certain composition scenarios.
//
// Parameters:
// - fab: ReaderIO containing a function
// - a: The value to apply to the function
//
// Returns a ReaderIO with the function applied to the value.
//
//go:inline
func MonadFlap[B, A any](fab ReaderIO[func(A) B], a A) ReaderIO[B] {
return RIO.MonadFlap(fab, a)
}
// Flap applies a value to a function wrapped in a [ReaderIO].
// This is the curried version of [MonadFlap].
//
// Parameters:
// - a: The value to apply to the function
//
// Returns a function that applies the value to a ReaderIO function.
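//
// Example (an illustrative sketch):
//
//	triple := Of(func(n int) int { return n * 3 })
//	value := Flap[int](7)(triple)(context.Background())() // 21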
//
//go:inline
func Flap[B, A any](a A) Operator[func(A) B, B] {
return RIO.Flap[context.Context, B](a)
}
// MonadChainReaderK chains a [ReaderIO] with a function that returns a [Reader].
// The Reader is lifted into the ReaderIO context, allowing composition of
// Reader and ReaderIO operations.
//
// Parameters:
// - ma: The ReaderIO to chain from
// - f: Function that produces a Reader
//
// Returns a new ReaderIO with the chained Reader computation.
//
//go:inline
func MonadChainReaderK[A, B any](ma ReaderIO[A], f reader.Kleisli[context.Context, A, B]) ReaderIO[B] {
return RIO.MonadChainReaderK(ma, f)
}
// ChainReaderK chains a [ReaderIO] with a function that returns a [Reader].
// This is the curried version of [MonadChainReaderK].
//
// Parameters:
// - f: Function that produces a Reader
//
// Returns a function that chains Reader-returning functions.
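//
// Example (an illustrative sketch):
//
//	scale := ChainReaderK(func(n int) reader.Reader[context.Context, int] {
//	    return func(ctx context.Context) int { return n * 2 }
//	})
//	value := scale(Of(5))(context.Background())() // 10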
//
//go:inline
func ChainReaderK[A, B any](f reader.Kleisli[context.Context, A, B]) Operator[A, B] {
return RIO.ChainReaderK(f)
}
// MonadChainFirstReaderK chains a function that returns a [Reader] but keeps the original value.
// The Reader computation is executed for its side effects only.
//
// Parameters:
// - ma: The ReaderIO to chain from
// - f: Function that produces a Reader
//
// Returns a ReaderIO with the original value after executing the Reader.
//
//go:inline
func MonadChainFirstReaderK[A, B any](ma ReaderIO[A], f reader.Kleisli[context.Context, A, B]) ReaderIO[A] {
return RIO.MonadChainFirstReaderK(ma, f)
}
// MonadTapReaderK chains a function that returns a [Reader] but keeps the original value.
// This is an alias for [MonadChainFirstReaderK] and is useful for side effects.
//
// Parameters:
// - ma: The ReaderIO to tap
// - f: Function that produces a Reader for side effects
//
// Returns a ReaderIO with the original value after executing the Reader.
//
//go:inline
func MonadTapReaderK[A, B any](ma ReaderIO[A], f reader.Kleisli[context.Context, A, B]) ReaderIO[A] {
return RIO.MonadTapReaderK(ma, f)
}
// ChainFirstReaderK chains a function that returns a [Reader] but keeps the original value.
// This is the curried version of [MonadChainFirstReaderK].
//
// Parameters:
// - f: Function that produces a Reader
//
// Returns a function that chains Reader-returning functions while preserving the original value.
//
//go:inline
func ChainFirstReaderK[A, B any](f reader.Kleisli[context.Context, A, B]) Operator[A, A] {
return RIO.ChainFirstReaderK(f)
}
// TapReaderK chains a function that returns a [Reader] but keeps the original value.
// This is the curried version of [MonadTapReaderK], an alias for [ChainFirstReaderK].
//
// Parameters:
// - f: Function that produces a Reader for side effects
//
// Returns a function that taps with Reader-returning functions.
//
//go:inline
func TapReaderK[A, B any](f reader.Kleisli[context.Context, A, B]) Operator[A, A] {
return RIO.TapReaderK(f)
}
// Read executes a [ReaderIO] with a given context, returning the resulting [IO].
// This is useful for providing the context dependency and obtaining an IO action
// that can be executed later.
//
// Parameters:
// - r: The context to provide to the ReaderIO
//
// Returns a function that converts a ReaderIO into an IO by applying the context.
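//
// Example (an illustrative sketch):
//
//	rio := Of(42)
//	action := Read[int](context.Background())(rio) // IO[int]
//	value := action()                              // 42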
//
//go:inline
func Read[A any](r context.Context) func(ReaderIO[A]) IO[A] {
return RIO.Read[A](r)
}
// Local transforms the context.Context environment before passing it to a ReaderIO computation.
//
// This is the Reader's local operation, which allows you to modify the environment
// for a specific computation without affecting the outer context. The transformation
// function receives the current context and returns a new context along with a
// cancel function. The cancel function is automatically called when the computation
// completes (via defer), ensuring proper cleanup of resources.
//
// This is useful for:
// - Adding timeouts or deadlines to specific operations
// - Adding context values for nested computations
// - Creating isolated context scopes
// - Implementing context-based dependency injection
//
// Type Parameters:
// - A: The value type of the ReaderIO
//
// Parameters:
// - f: A function that transforms the context and returns a cancel function
//
// Returns:
// - An Operator that runs the computation with the transformed context
//
// Example:
//
// import F "github.com/IBM/fp-go/v2/function"
//
// // Add a custom value to the context
// type key int
// const userKey key = 0
//
// addUser := readerio.Local[string](func(ctx context.Context) (context.Context, context.CancelFunc) {
// newCtx := context.WithValue(ctx, userKey, "Alice")
// return newCtx, func() {} // No-op cancel
// })
//
// getUser := readerio.FromReader(func(ctx context.Context) string {
// if user := ctx.Value(userKey); user != nil {
// return user.(string)
// }
// return "unknown"
// })
//
// result := F.Pipe1(
// getUser,
// addUser,
// )
// user := result(context.Background())() // Returns "Alice"
//
// Timeout Example:
//
// // Add a 5-second timeout to a specific operation
// withTimeout := readerio.Local[Data](func(ctx context.Context) (context.Context, context.CancelFunc) {
// return context.WithTimeout(ctx, 5*time.Second)
// })
//
// result := F.Pipe1(
// fetchData,
// withTimeout,
// )
func Local[A any](f func(context.Context) (context.Context, context.CancelFunc)) Operator[A, A] {
return func(rr ReaderIO[A]) ReaderIO[A] {
return func(ctx context.Context) IO[A] {
return func() A {
otherCtx, otherCancel := f(ctx)
defer otherCancel()
return rr(otherCtx)()
}
}
}
}
// WithTimeout adds a timeout to the context for a ReaderIO computation.
//
// This is a convenience wrapper around Local that uses context.WithTimeout.
// The computation must complete within the specified duration, or it will be
// cancelled. This is useful for ensuring operations don't run indefinitely
// and for implementing timeout-based error handling.
//
// The timeout is relative to when the ReaderIO is executed, not when
// WithTimeout is called. The cancel function is automatically called when
// the computation completes, ensuring proper cleanup.
//
// Type Parameters:
// - A: The value type of the ReaderIO
//
// Parameters:
// - timeout: The maximum duration for the computation
//
// Returns:
// - An Operator that runs the computation with a timeout
//
// Example:
//
// import (
// "time"
// F "github.com/IBM/fp-go/v2/function"
// )
//
// // Fetch data with a 5-second timeout
// fetchData := readerio.FromReader(func(ctx context.Context) Data {
// // Simulate slow operation
// select {
// case <-time.After(10 * time.Second):
// return Data{Value: "slow"}
// case <-ctx.Done():
// return Data{}
// }
// })
//
// result := F.Pipe1(
// fetchData,
// readerio.WithTimeout[Data](5*time.Second),
// )
// data := result(context.Background())() // Returns Data{} after 5s timeout
//
// Successful Example:
//
// quickFetch := readerio.Of(Data{Value: "quick"})
// result := F.Pipe1(
// quickFetch,
// readerio.WithTimeout[Data](5*time.Second),
// )
// data := result(context.Background())() // Returns Data{Value: "quick"}
func WithTimeout[A any](timeout time.Duration) Operator[A, A] {
return Local[A](func(ctx context.Context) (context.Context, context.CancelFunc) {
return context.WithTimeout(ctx, timeout)
})
}
// WithDeadline adds an absolute deadline to the context for a ReaderIO computation.
//
// This is a convenience wrapper around Local that uses context.WithDeadline.
// The computation must complete before the specified time, or it will be
// cancelled. This is useful for coordinating operations that must finish
// by a specific time, such as request deadlines or scheduled tasks.
//
// The deadline is an absolute time, unlike WithTimeout which uses a relative
// duration. The cancel function is automatically called when the computation
// completes, ensuring proper cleanup.
//
// Type Parameters:
// - A: The value type of the ReaderIO
//
// Parameters:
// - deadline: The absolute time by which the computation must complete
//
// Returns:
// - An Operator that runs the computation with a deadline
//
// Example:
//
// import (
// "time"
// F "github.com/IBM/fp-go/v2/function"
// )
//
// // Operation must complete by 3 PM
// deadline := time.Date(2024, 1, 1, 15, 0, 0, 0, time.UTC)
//
// fetchData := readerio.FromReader(func(ctx context.Context) Data {
// // Simulate operation
// select {
// case <-time.After(1 * time.Hour):
// return Data{Value: "done"}
// case <-ctx.Done():
// return Data{}
// }
// })
//
// result := F.Pipe1(
// fetchData,
// readerio.WithDeadline[Data](deadline),
// )
// data := result(context.Background())() // Returns Data{} if past deadline
//
// Combining with Parent Context:
//
// // If parent context already has a deadline, the earlier one takes precedence
// parentCtx, cancel := context.WithDeadline(context.Background(), time.Now().Add(1*time.Hour))
// defer cancel()
//
// laterDeadline := time.Now().Add(2 * time.Hour)
// result := F.Pipe1(
// fetchData,
// readerio.WithDeadline[Data](laterDeadline),
// )
// data := result(parentCtx)() // Will use parent's 1-hour deadline
func WithDeadline[A any](deadline time.Time) Operator[A, A] {
return Local[A](func(ctx context.Context) (context.Context, context.CancelFunc) {
return context.WithDeadline(ctx, deadline)
})
}
// Delay creates an operation that passes the value through after the given delay has elapsed.
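//
// Example (an illustrative sketch):
//
//	delayed := Delay[string](100*time.Millisecond)(Of("ready"))
//	value := delayed(context.Background())() // "ready", after roughly 100ms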
//
//go:inline
func Delay[A any](delay time.Duration) Operator[A, A] {
return RIO.Delay[context.Context, A](delay)
}
// After creates an operation that passes the value through once the given [time.Time] has been reached.
//
//go:inline
func After[A any](timestamp time.Time) Operator[A, A] {
return RIO.After[context.Context, A](timestamp)
}

View File

@@ -0,0 +1,502 @@
// Copyright (c) 2023 - 2025 IBM Corp.
// All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package readerio
import (
"context"
"testing"
F "github.com/IBM/fp-go/v2/function"
"github.com/IBM/fp-go/v2/internal/utils"
G "github.com/IBM/fp-go/v2/io"
N "github.com/IBM/fp-go/v2/number"
"github.com/IBM/fp-go/v2/reader"
"github.com/stretchr/testify/assert"
)
func TestMonadMap(t *testing.T) {
rio := Of(5)
doubled := MonadMap(rio, N.Mul(2))
result := doubled(context.Background())()
assert.Equal(t, 10, result)
}
func TestMap(t *testing.T) {
g := F.Pipe1(
Of(1),
Map(utils.Double),
)
assert.Equal(t, 2, g(context.Background())())
}
func TestMonadMapTo(t *testing.T) {
rio := Of(42)
replaced := MonadMapTo(rio, "constant")
result := replaced(context.Background())()
assert.Equal(t, "constant", result)
}
func TestMapTo(t *testing.T) {
result := F.Pipe1(
Of(42),
MapTo[int]("constant"),
)
assert.Equal(t, "constant", result(context.Background())())
}
func TestMonadChain(t *testing.T) {
rio1 := Of(5)
result := MonadChain(rio1, func(n int) ReaderIO[int] {
return Of(n * 3)
})
assert.Equal(t, 15, result(context.Background())())
}
func TestChain(t *testing.T) {
result := F.Pipe1(
Of(5),
Chain(func(n int) ReaderIO[int] {
return Of(n * 3)
}),
)
assert.Equal(t, 15, result(context.Background())())
}
func TestMonadChainFirst(t *testing.T) {
sideEffect := 0
rio := Of(42)
result := MonadChainFirst(rio, func(n int) ReaderIO[string] {
sideEffect = n
return Of("side effect")
})
value := result(context.Background())()
assert.Equal(t, 42, value)
assert.Equal(t, 42, sideEffect)
}
func TestChainFirst(t *testing.T) {
sideEffect := 0
result := F.Pipe1(
Of(42),
ChainFirst(func(n int) ReaderIO[string] {
sideEffect = n
return Of("side effect")
}),
)
value := result(context.Background())()
assert.Equal(t, 42, value)
assert.Equal(t, 42, sideEffect)
}
func TestMonadTap(t *testing.T) {
sideEffect := 0
rio := Of(42)
result := MonadTap(rio, func(n int) ReaderIO[func()] {
sideEffect = n
return Of(func() {})
})
value := result(context.Background())()
assert.Equal(t, 42, value)
assert.Equal(t, 42, sideEffect)
}
func TestTap(t *testing.T) {
sideEffect := 0
result := F.Pipe1(
Of(42),
Tap(func(n int) ReaderIO[func()] {
sideEffect = n
return Of(func() {})
}),
)
value := result(context.Background())()
assert.Equal(t, 42, value)
assert.Equal(t, 42, sideEffect)
}
func TestOf(t *testing.T) {
rio := Of(100)
result := rio(context.Background())()
assert.Equal(t, 100, result)
}
func TestMonadAp(t *testing.T) {
fabIO := Of(N.Mul(2))
faIO := Of(5)
result := MonadAp(fabIO, faIO)
assert.Equal(t, 10, result(context.Background())())
}
func TestAp(t *testing.T) {
g := F.Pipe1(
Of(utils.Double),
Ap[int](Of(1)),
)
assert.Equal(t, 2, g(context.Background())())
}
func TestMonadApSeq(t *testing.T) {
fabIO := Of(N.Add(10))
faIO := Of(5)
result := MonadApSeq(fabIO, faIO)
assert.Equal(t, 15, result(context.Background())())
}
func TestApSeq(t *testing.T) {
g := F.Pipe1(
Of(N.Add(10)),
ApSeq[int](Of(5)),
)
assert.Equal(t, 15, g(context.Background())())
}
func TestMonadApPar(t *testing.T) {
fabIO := Of(N.Add(10))
faIO := Of(5)
result := MonadApPar(fabIO, faIO)
assert.Equal(t, 15, result(context.Background())())
}
func TestApPar(t *testing.T) {
g := F.Pipe1(
Of(N.Add(10)),
ApPar[int](Of(5)),
)
assert.Equal(t, 15, g(context.Background())())
}
func TestAsk(t *testing.T) {
rio := Ask()
ctx := context.WithValue(context.Background(), "key", "value")
result := rio(ctx)()
assert.Equal(t, ctx, result)
}
func TestFromIO(t *testing.T) {
ioAction := G.Of(42)
rio := FromIO(ioAction)
result := rio(context.Background())()
assert.Equal(t, 42, result)
}
func TestFromReader(t *testing.T) {
rdr := func(ctx context.Context) int {
return 42
}
rio := FromReader(rdr)
result := rio(context.Background())()
assert.Equal(t, 42, result)
}
func TestFromLazy(t *testing.T) {
lazy := func() int { return 42 }
rio := FromLazy(lazy)
result := rio(context.Background())()
assert.Equal(t, 42, result)
}
func TestMonadChainIOK(t *testing.T) {
rio := Of(5)
result := MonadChainIOK(rio, func(n int) G.IO[int] {
return G.Of(n * 4)
})
assert.Equal(t, 20, result(context.Background())())
}
func TestChainIOK(t *testing.T) {
result := F.Pipe1(
Of(5),
ChainIOK(func(n int) G.IO[int] {
return G.Of(n * 4)
}),
)
assert.Equal(t, 20, result(context.Background())())
}
func TestMonadChainFirstIOK(t *testing.T) {
sideEffect := 0
rio := Of(42)
result := MonadChainFirstIOK(rio, func(n int) G.IO[string] {
sideEffect = n
return G.Of("side effect")
})
value := result(context.Background())()
assert.Equal(t, 42, value)
assert.Equal(t, 42, sideEffect)
}
func TestChainFirstIOK(t *testing.T) {
sideEffect := 0
result := F.Pipe1(
Of(42),
ChainFirstIOK(func(n int) G.IO[string] {
sideEffect = n
return G.Of("side effect")
}),
)
value := result(context.Background())()
assert.Equal(t, 42, value)
assert.Equal(t, 42, sideEffect)
}
func TestMonadTapIOK(t *testing.T) {
sideEffect := 0
rio := Of(42)
result := MonadTapIOK(rio, func(n int) G.IO[func()] {
sideEffect = n
return G.Of(func() {})
})
value := result(context.Background())()
assert.Equal(t, 42, value)
assert.Equal(t, 42, sideEffect)
}
func TestTapIOK(t *testing.T) {
sideEffect := 0
result := F.Pipe1(
Of(42),
TapIOK(func(n int) G.IO[func()] {
sideEffect = n
return G.Of(func() {})
}),
)
value := result(context.Background())()
assert.Equal(t, 42, value)
assert.Equal(t, 42, sideEffect)
}
func TestDefer(t *testing.T) {
counter := 0
rio := Defer(func() ReaderIO[int] {
counter++
return Of(counter)
})
result1 := rio(context.Background())()
result2 := rio(context.Background())()
assert.Equal(t, 1, result1)
assert.Equal(t, 2, result2)
}
func TestMemoize(t *testing.T) {
counter := 0
rio := Of(0)
memoized := Memoize(MonadMap(rio, func(int) int {
counter++
return counter
}))
result1 := memoized(context.Background())()
result2 := memoized(context.Background())()
assert.Equal(t, 1, result1)
assert.Equal(t, 1, result2) // Same value, memoized
}
func TestFlatten(t *testing.T) {
nested := Of(Of(42))
flattened := Flatten(nested)
result := flattened(context.Background())()
assert.Equal(t, 42, result)
}
func TestMonadFlap(t *testing.T) {
fabIO := Of(N.Mul(3))
result := MonadFlap(fabIO, 7)
assert.Equal(t, 21, result(context.Background())())
}
func TestFlap(t *testing.T) {
result := F.Pipe1(
Of(N.Mul(3)),
Flap[int](7),
)
assert.Equal(t, 21, result(context.Background())())
}
func TestMonadChainReaderK(t *testing.T) {
rio := Of(5)
result := MonadChainReaderK(rio, func(n int) reader.Reader[context.Context, int] {
return func(ctx context.Context) int { return n * 2 }
})
assert.Equal(t, 10, result(context.Background())())
}
func TestChainReaderK(t *testing.T) {
result := F.Pipe1(
Of(5),
ChainReaderK(func(n int) reader.Reader[context.Context, int] {
return func(ctx context.Context) int { return n * 2 }
}),
)
assert.Equal(t, 10, result(context.Background())())
}
func TestMonadChainFirstReaderK(t *testing.T) {
sideEffect := 0
rio := Of(42)
result := MonadChainFirstReaderK(rio, func(n int) reader.Reader[context.Context, string] {
return func(ctx context.Context) string {
sideEffect = n
return "side effect"
}
})
value := result(context.Background())()
assert.Equal(t, 42, value)
assert.Equal(t, 42, sideEffect)
}
func TestChainFirstReaderK(t *testing.T) {
sideEffect := 0
result := F.Pipe1(
Of(42),
ChainFirstReaderK(func(n int) reader.Reader[context.Context, string] {
return func(ctx context.Context) string {
sideEffect = n
return "side effect"
}
}),
)
value := result(context.Background())()
assert.Equal(t, 42, value)
assert.Equal(t, 42, sideEffect)
}
func TestMonadTapReaderK(t *testing.T) {
sideEffect := 0
rio := Of(42)
result := MonadTapReaderK(rio, func(n int) reader.Reader[context.Context, func()] {
return func(ctx context.Context) func() {
sideEffect = n
return func() {}
}
})
value := result(context.Background())()
assert.Equal(t, 42, value)
assert.Equal(t, 42, sideEffect)
}
func TestTapReaderK(t *testing.T) {
sideEffect := 0
result := F.Pipe1(
Of(42),
TapReaderK(func(n int) reader.Reader[context.Context, func()] {
return func(ctx context.Context) func() {
sideEffect = n
return func() {}
}
}),
)
value := result(context.Background())()
assert.Equal(t, 42, value)
assert.Equal(t, 42, sideEffect)
}
func TestRead(t *testing.T) {
rio := Of(42)
ctx := context.Background()
ioAction := Read[int](ctx)(rio)
result := ioAction()
assert.Equal(t, 42, result)
}
func TestComplexPipeline(t *testing.T) {
// Test a complex pipeline combining multiple operations
result := F.Pipe3(
Ask(),
Map(func(ctx context.Context) int { return 5 }),
Chain(func(n int) ReaderIO[int] {
return Of(n * 2)
}),
Map(N.Add(10)),
)
assert.Equal(t, 20, result(context.Background())()) // (5 * 2) + 10 = 20
}
func TestFromIOWithChain(t *testing.T) {
ioAction := G.Of(10)
result := F.Pipe1(
FromIO(ioAction),
Chain(func(n int) ReaderIO[int] {
return Of(n + 5)
}),
)
assert.Equal(t, 15, result(context.Background())())
}
func TestTapWithLogging(t *testing.T) {
// Simulate logging scenario
logged := []int{}
result := F.Pipe3(
Of(42),
Tap(func(n int) ReaderIO[func()] {
logged = append(logged, n)
return Of(func() {})
}),
Map(N.Mul(2)),
Tap(func(n int) ReaderIO[func()] {
logged = append(logged, n)
return Of(func() {})
}),
)
value := result(context.Background())()
assert.Equal(t, 84, value)
assert.Equal(t, []int{42, 84}, logged)
}

View File

@@ -0,0 +1,86 @@
// Copyright (c) 2023 - 2025 IBM Corp.
// All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package readerio
import (
"github.com/IBM/fp-go/v2/readerio"
)
// TailRec implements stack-safe tail recursion for the ReaderIO monad.
//
// This function enables recursive computations that depend on a [context.Context] and
// perform side effects, without risking stack overflow. It uses an iterative loop to
// execute the recursion, making it safe for deep or unbounded recursion.
//
// The function takes a Kleisli arrow that returns Trampoline[A, B]:
// - Bounce(A): Continue recursion with the new state A
// - Land(B): Terminate recursion and return the final result B
//
// Type Parameters:
// - A: The state type that changes during recursion
// - B: The final result type when recursion terminates
//
// Parameters:
// - f: A Kleisli arrow (A => ReaderIO[Trampoline[A, B]]) that controls recursion flow
//
// Returns:
// - A Kleisli arrow (A => ReaderIO[B]) that executes the recursion safely
//
// Example - Countdown:
//
// countdownStep := func(n int) ReaderIO[tailrec.Trampoline[int, string]] {
// return func(ctx context.Context) IO[tailrec.Trampoline[int, string]] {
// return func() tailrec.Trampoline[int, string] {
// if n <= 0 {
// return tailrec.Land[int]("Done!")
// }
// return tailrec.Bounce[string](n - 1)
// }
// }
// }
//
// countdown := TailRec(countdownStep)
// result := countdown(10)(context.Background())() // Returns "Done!"
//
// Example - Sum with context:
//
// type SumState struct {
// numbers []int
// total int
// }
//
// sumStep := func(state SumState) ReaderIO[tailrec.Trampoline[SumState, int]] {
// return func(ctx context.Context) IO[tailrec.Trampoline[SumState, int]] {
// return func() tailrec.Trampoline[SumState, int] {
// if len(state.numbers) == 0 {
// return tailrec.Land[SumState](state.total)
// }
// return tailrec.Bounce[int](SumState{
// numbers: state.numbers[1:],
// total: state.total + state.numbers[0],
// })
// }
// }
// }
//
// sum := TailRec(sumStep)
// result := sum(SumState{numbers: []int{1, 2, 3, 4, 5}})(context.Background())()
// // Returns 15, safe even for very large slices
//
//go:inline
func TailRec[A, B any](f Kleisli[A, Trampoline[A, B]]) Kleisli[A, B] {
return readerio.TailRec(f)
}

View File

@@ -0,0 +1,106 @@
// Copyright (c) 2025 IBM Corp.
// All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package readerio
import (
"github.com/IBM/fp-go/v2/retry"
RG "github.com/IBM/fp-go/v2/retry/generic"
)
// Retrying retries a ReaderIO computation according to a retry policy.
//
// This function implements a retry mechanism for operations that depend on a [context.Context]
// and perform side effects (IO). The retry loop continues until one of the following occurs:
// - The action succeeds and the check function returns false (no retry needed)
// - The retry policy returns None (retry limit reached)
// - The check function returns false (indicating success or a non-retryable condition)
//
// Type Parameters:
// - A: The type of the value produced by the action
//
// Parameters:
//
// - policy: A RetryPolicy that determines when and how long to wait between retries.
// The policy receives a RetryStatus on each iteration and returns an optional delay.
// If it returns None, retrying stops. Common policies include LimitRetries,
// ExponentialBackoff, and CapDelay from the retry package.
//
// - action: A Kleisli arrow that takes a RetryStatus and returns a ReaderIO[A].
// This function is called on each retry attempt and receives information about the
// current retry state (iteration number, cumulative delay, etc.).
//
// - check: A predicate function that examines the result A and returns true if the
// operation should be retried, or false if it should stop. This allows you to
// distinguish between retryable conditions and successful/permanent results.
//
// Returns:
// - A ReaderIO[A] that, when executed with a context, will perform the retry logic
// and return the final result.
//
// Example:
//
// // Create a retry policy: exponential backoff with a cap, limited to 5 retries
// policy := M.Concat(
// retry.LimitRetries(5),
// retry.CapDelay(10*time.Second, retry.ExponentialBackoff(100*time.Millisecond)),
// )(retry.Monoid)
//
// // Action that fetches data, with retry status information
// fetchData := func(status retry.RetryStatus) ReaderIO[string] {
// return func(ctx context.Context) IO[string] {
// return func() string {
// // Simulate an operation that might fail
// if status.IterNumber < 3 {
// return "" // Empty result indicates failure
// }
// return "success"
// }
// }
// }
//
// // Check function: retry if result is empty
// shouldRetry := func(s string) bool {
// return s == ""
// }
//
// // Create the retrying computation
// retryingFetch := Retrying(policy, fetchData, shouldRetry)
//
// // Execute
// ctx := context.Background()
// result := retryingFetch(ctx)() // Returns "success" after 3 attempts
//
//go:inline
func Retrying[A any](
policy retry.RetryPolicy,
action Kleisli[retry.RetryStatus, A],
check Predicate[A],
) ReaderIO[A] {
// get an implementation for the types
return RG.Retrying(
Chain[A, Trampoline[retry.RetryStatus, A]],
Map[retry.RetryStatus, Trampoline[retry.RetryStatus, A]],
Of[Trampoline[retry.RetryStatus, A]],
Of[retry.RetryStatus],
Delay[retry.RetryStatus],
TailRec,
policy,
action,
check,
)
}

View File

@@ -0,0 +1,84 @@
// Copyright (c) 2023 - 2025 IBM Corp.
// All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package readerio
import (
"context"
"github.com/IBM/fp-go/v2/consumer"
"github.com/IBM/fp-go/v2/either"
"github.com/IBM/fp-go/v2/function"
"github.com/IBM/fp-go/v2/io"
"github.com/IBM/fp-go/v2/lazy"
"github.com/IBM/fp-go/v2/predicate"
"github.com/IBM/fp-go/v2/reader"
"github.com/IBM/fp-go/v2/readerio"
"github.com/IBM/fp-go/v2/tailrec"
)
type (
// Lazy represents a deferred computation that produces a value of type A when executed.
// The computation is not executed until explicitly invoked.
Lazy[A any] = lazy.Lazy[A]
// IO represents a side-effectful computation that produces a value of type A.
// The computation is deferred and only executed when invoked.
//
// IO[A] is equivalent to func() A
IO[A any] = io.IO[A]
// Reader represents a computation that depends on a context of type R.
// This is used for dependency injection and accessing shared context.
//
// Reader[R, A] is equivalent to func(R) A
Reader[R, A any] = reader.Reader[R, A]
// ReaderIO represents a context-dependent computation that performs side effects.
// This is specialized to use [context.Context] as the context type.
//
// ReaderIO[A] is equivalent to func(context.Context) func() A
ReaderIO[A any] = readerio.ReaderIO[context.Context, A]
// Kleisli represents a Kleisli arrow for the ReaderIO monad.
// It is a function that takes a value of type A and returns a ReaderIO computation
// that produces a value of type B.
//
// Kleisli arrows are used for composing monadic computations and are fundamental
// to functional programming patterns involving effects and context.
//
// Kleisli[A, B] is equivalent to func(A) func(context.Context) func() B
Kleisli[A, B any] = reader.Reader[A, ReaderIO[B]]
// Operator represents a transformation from one ReaderIO computation to another.
// It takes a ReaderIO[A] and returns a ReaderIO[B], allowing for the composition
// of context-dependent, side-effectful computations.
//
// Operators are useful for building pipelines of ReaderIO computations where
// each step can depend on the previous computation's result.
//
// Operator[A, B] is equivalent to func(ReaderIO[A]) func(context.Context) func() B
Operator[A, B any] = Kleisli[ReaderIO[A], B]
Consumer[A any] = consumer.Consumer[A]
Either[E, A any] = either.Either[E, A]
Trampoline[B, L any] = tailrec.Trampoline[B, L]
Predicate[A any] = predicate.Predicate[A]
Void = function.Void
)

View File

@@ -0,0 +1,801 @@
# Sequence Functions and Point-Free Style Programming
This document explains how the `Sequence*` functions in the `context/readerioresult` package enable point-free style programming and improve code composition.
## Table of Contents
1. [What is Point-Free Style?](#what-is-point-free-style)
2. [The Problem: Nested Function Application](#the-problem-nested-function-application)
3. [The Solution: Sequence Functions](#the-solution-sequence-functions)
4. [How Sequence Enables Point-Free Style](#how-sequence-enables-point-free-style)
5. [TraverseReader: Introducing Dependencies](#traversereader-introducing-dependencies)
6. [Practical Benefits](#practical-benefits)
7. [Examples](#examples)
8. [Comparison: With and Without Sequence](#comparison-with-and-without-sequence)
## What is Point-Free Style?
Point-free style (also called tacit programming) is a programming paradigm where function definitions don't explicitly mention their arguments. Instead, functions are composed using combinators and higher-order functions.
**Traditional style (with points):**
```go
func double(x int) int {
return x * 2
}
```
**Point-free style (without points):**
```go
var double = N.Mul(2)
```
The key benefit is that point-free style emphasizes **what** the function does (its transformation) rather than **how** it manipulates data.
## The Problem: Nested Function Application
In functional programming with monadic types like `ReaderIOResult`, we often have nested structures where we need to apply parameters in a specific order. Consider:
```go
type ReaderIOResult[A any] = func(context.Context) func() Either[error, A]
type Reader[R, A any] = func(R) A
// A computation that produces a Reader
type Computation = ReaderIOResult[Reader[Config, int]]
// Expands to: func(context.Context) func() Either[error, func(Config) int]
```
To use this, we must apply parameters in this order:
1. First, provide `context.Context`
2. Then, execute the IO effect (call the function)
3. Then, unwrap the `Either` to get the `Reader`
4. Finally, provide the `Config`
This creates several problems:
### Problem 1: Awkward Parameter Order
```go
computation := getComputation()
ctx := context.Background()
cfg := Config{Value: 42}
// Must apply in this specific order
result := computation(ctx)() // Get Either[error, Reader[Config, int]]
if reader, err := either.Unwrap(result); err == nil {
value := reader(cfg) // Finally apply Config
// use value
}
```
The `Config` parameter, which is often known early and stable, must be provided last. This prevents partial application and reuse.
### Problem 2: Cannot Partially Apply Dependencies
```go
// Want to do this: create a reusable computation with Config baked in
// But can't because Config comes last!
withConfig := computation(cfg) // ❌ Doesn't work - cfg comes last, not first
```
### Problem 3: Breaks Point-Free Composition
```go
// Want to compose like this:
var pipeline = F.Flow3(
getComputation,
applyConfig(cfg), // ❌ Can't do this - Config comes last
processResult,
)
```
## The Solution: Sequence Functions
The `Sequence*` functions solve this by "flipping" or "sequencing" the nested structure, changing the order in which parameters are applied.
### SequenceReader
```go
func SequenceReader[R, A any](
ma ReaderIOResult[Reader[R, A]]
) Kleisli[R, A]
```
**Type transformation:**
```
From: func(context.Context) func() Either[error, func(R) A]
To: func(R) func(context.Context) func() Either[error, A]
```
Now `R` (the Reader's environment) comes **first**, before `context.Context`!
### SequenceReaderIO
```go
func SequenceReaderIO[R, A any](
ma ReaderIOResult[ReaderIO[R, A]]
) Kleisli[R, A]
```
**Type transformation:**
```
From: func(context.Context) func() Either[error, func(R) func() A]
To: func(R) func(context.Context) func() Either[error, A]
```
### SequenceReaderResult
```go
func SequenceReaderResult[R, A any](
ma ReaderIOResult[ReaderResult[R, A]]
) Kleisli[R, A]
```
**Type transformation:**
```
From: func(context.Context) func() Either[error, func(R) Either[error, A]]
To: func(R) func(context.Context) func() Either[error, A]
```
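As a small sketch (the `makeParser` computation below is hypothetical and written out with the literal function types used throughout this document), `SequenceReaderResult` lets the environment be supplied first even when the inner computation can itself fail:
```go
// A computation whose successful result is itself a fallible Reader
makeParser := func(ctx context.Context) func() Either[error, func(Config) Either[error, int]] {
    return func() Either[error, func(Config) Either[error, int]] {
        return Right[error](func(cfg Config) Either[error, int] {
            if cfg.Multiplier == 0 {
                return Left[int](errors.New("multiplier must not be zero"))
            }
            return Right[error](10 * cfg.Multiplier)
        })
    }
}

// Environment first, context and execution later
parse := SequenceReaderResult[Config, int](makeParser)
result := parse(Config{Multiplier: 3})(context.Background())() // Right(30)
```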
## How Sequence Enables Point-Free Style
### 1. Partial Application
By moving the environment parameter first, we can partially apply it:
```go
type Config struct { Multiplier int }
computation := getComputation() // ReaderIOResult[Reader[Config, int]]
sequenced := SequenceReader[Config, int](computation)
// Partially apply Config
cfg := Config{Multiplier: 5}
withConfig := sequenced(cfg) // ✅ Now we have ReaderIOResult[int]
// Reuse with different contexts
result1 := withConfig(ctx1)()
result2 := withConfig(ctx2)()
```
### 2. Dependency Injection
Inject dependencies early in the pipeline:
```go
type Database struct { ConnectionString string }
makeQuery := func(ctx context.Context) func() Either[error, func(Database) string] {
// ... implementation
}
// Sequence to enable DI
queryWithDB := SequenceReader[Database, string](makeQuery)
// Inject database
db := Database{ConnectionString: "localhost:5432"}
query := queryWithDB(db) // ✅ Database injected
// Use query with any context
result := query(context.Background())()
```
### 3. Point-Free Composition
Build pipelines without mentioning intermediate values:
```go
var pipeline = F.Flow3(
getComputation, // ReaderIOResult[Reader[Config, int]]
SequenceReader[Config, int], // func(Config) ReaderIOResult[int]
applyConfig(cfg), // ReaderIOResult[int]
)
// Or with partial application:
var withConfig = F.Pipe1(
getComputation(),
SequenceReader[Config, int],
)
result := withConfig(cfg)(ctx)()
```
### 4. Reusable Computations
Create specialized versions of generic computations:
```go
// Generic computation
makeServiceInfo := func(ctx context.Context) func() Either[error, func(ServiceConfig) string] {
// ... implementation
}
sequenced := SequenceReader[ServiceConfig, string](makeServiceInfo)
// Create specialized versions
authService := sequenced(ServiceConfig{Name: "Auth", Version: "1.0"})
userService := sequenced(ServiceConfig{Name: "User", Version: "2.0"})
// Reuse across contexts
authInfo := authService(ctx)()
userInfo := userService(ctx)()
```
## TraverseReader: Introducing Dependencies
While `SequenceReader` flips the parameter order of an existing nested structure, `TraverseReader` allows you to **introduce** a new Reader dependency into an existing computation.
### Function Signature
```go
func TraverseReader[R, A, B any](
f reader.Kleisli[R, A, B],
) func(ReaderIOResult[A]) Kleisli[R, B]
```
**Type transformation:**
```
Input: ReaderIOResult[A] = func(context.Context) func() Either[error, A]
With: reader.Kleisli[R, A, B] = func(A) func(R) B
Output: Kleisli[R, B] = func(R) func(context.Context) func() Either[error, B]
```
### What It Does
`TraverseReader` takes:
1. A Reader-based transformation `f: func(A) func(R) B` that depends on environment `R`
2. Returns a function that transforms `ReaderIOResult[A]` into `Kleisli[R, B]`
This allows you to:
- Add environment dependencies to computations that don't have them yet
- Transform values within a ReaderIOResult using environment-dependent logic
- Build composable pipelines where transformations depend on configuration
### Key Difference from SequenceReader
- **SequenceReader**: Works with computations that **already contain** a Reader (`ReaderIOResult[Reader[R, A]]`)
- Flips the order so `R` comes first
- No transformation of the value itself
- **TraverseReader**: Works with computations that **don't have** a Reader yet (`ReaderIOResult[A]`)
- Introduces a new Reader dependency via a transformation function
- Transforms `A` to `B` using environment `R`
### Example: Adding Configuration to a Computation
```go
type Config struct {
Multiplier int
Prefix string
}
// Original computation that just produces an int
getValue := func(ctx context.Context) func() Either[error, int] {
return func() Either[error, int] {
return Right[error](10)
}
}
// A Reader-based transformation that depends on Config
formatWithConfig := func(n int) func(Config) string {
return func(cfg Config) string {
result := n * cfg.Multiplier
return fmt.Sprintf("%s: %d", cfg.Prefix, result)
}
}
// Use TraverseReader to introduce Config dependency
traversed := TraverseReader[Config, int, string](formatWithConfig)
withConfig := traversed(getValue)
// Now we can provide Config to get the final result
cfg := Config{Multiplier: 5, Prefix: "Result"}
ctx := context.Background()
result := withConfig(cfg)(ctx)() // Returns Right("Result: 50")
```
### Point-Free Composition with TraverseReader
```go
// Build a pipeline that introduces dependencies at each stage
var pipeline = F.Flow4(
loadValue, // ReaderIOResult[int]
TraverseReader(multiplyByConfig), // Kleisli[Config, int]
applyConfig(cfg), // ReaderIOResult[int]
Chain(TraverseReader(formatWithStyle)), // Introduce another dependency
)
```
### When to Use TraverseReader vs SequenceReader
**Use SequenceReader when:**
- Your computation already returns a Reader: `ReaderIOResult[Reader[R, A]]`
- You just want to flip the parameter order
- No transformation of the value is needed
```go
// Already have Reader[Config, int]
computation := getComputation() // ReaderIOResult[Reader[Config, int]]
sequenced := SequenceReader[Config, int](computation)
result := sequenced(cfg)(ctx)()
```
**Use TraverseReader when:**
- Your computation doesn't have a Reader yet: `ReaderIOResult[A]`
- You want to transform the value using environment-dependent logic
- You're introducing a new dependency into the pipeline
```go
// Have ReaderIOResult[int], want to add Config dependency
computation := getValue() // ReaderIOResult[int]
traversed := TraverseReader[Config, int, string](formatWithConfig)
withDep := traversed(computation)
result := withDep(cfg)(ctx)()
```
### Practical Example: Multi-Stage Processing
```go
type DatabaseConfig struct {
ConnectionString string
Timeout time.Duration
}
type FormattingConfig struct {
DateFormat string
Timezone string
}
// Stage 1: Load raw data (no dependencies yet)
loadData := func(ctx context.Context) func() Either[error, RawData] {
// ... implementation
}
// Stage 2: Process with database config
processWithDB := func(raw RawData) func(DatabaseConfig) ProcessedData {
return func(cfg DatabaseConfig) ProcessedData {
// Use cfg.ConnectionString, cfg.Timeout
return ProcessedData{/* ... */}
}
}
// Stage 3: Format with formatting config
formatData := func(processed ProcessedData) func(FormattingConfig) string {
return func(cfg FormattingConfig) string {
// Use cfg.DateFormat, cfg.Timezone
return "formatted result"
}
}
// Build pipeline introducing dependencies at each stage
var pipeline = F.Flow4(
loadData,
TraverseReader[DatabaseConfig, RawData, ProcessedData](processWithDB),
// Now we have Kleisli[DatabaseConfig, ProcessedData]
applyConfig(dbConfig),
// Now we have ReaderIOResult[ProcessedData]
TraverseReader[FormattingConfig, ProcessedData, string](formatData),
// Now we have Kleisli[FormattingConfig, string]
)
// Execute with both configs
result := pipeline(fmtConfig)(ctx)()
```
### Combining TraverseReader and SequenceReader
You can combine both functions in complex pipelines:
```go
// Start with nested Reader
computation := getComputation() // ReaderIOResult[Reader[Config, User]]
var pipeline = F.Pipe3(
computation,
SequenceReader[Config, User], // Flip to get Kleisli[Config, User]
applyConfig(cfg), // Apply config, get ReaderIOResult[User]
TraverseReader(enrichWithDatabase), // Add database dependency
// Now have Kleisli[Database, EnrichedUser]
)
result := pipeline(db)(ctx)()
```
## Practical Benefits
### 1. **Performance: Eager Construction, Lazy Execution**
One of the most important but often overlooked benefits of point-free style is its performance characteristic: **the program structure is constructed eagerly (at definition time), but execution happens lazily (at runtime)**.
#### Construction Happens Once
When you define a pipeline using point-free style with `F.Flow`, `F.Pipe`, or function composition, the composition structure is built immediately at definition time:
```go
// Point-free style - composition built ONCE at definition time
var processUser = F.Flow3(
getDatabase,
SequenceReader[DatabaseConfig, Database],
applyConfig(dbConfig),
)
// The pipeline structure is now fixed in memory
```
#### Execution Happens on Demand
The actual computation only runs when you provide the final parameters and invoke the result:
```go
// Execute multiple times - only execution cost, no re-composition
result1 := processUser(ctx1)() // Fast - reuses pre-built pipeline
result2 := processUser(ctx2)() // Fast - reuses pre-built pipeline
result3 := processUser(ctx3)() // Fast - reuses pre-built pipeline
```
#### Performance Benefit for Repeated Execution
If a flow is executed multiple times, the point-free style is significantly more efficient because:
1. **Composition overhead is paid once** - The function composition happens at definition time
2. **No re-interpretation** - Each execution doesn't need to rebuild the pipeline
3. **Memory efficiency** - The composed function is created once and reused
4. **Better for hot paths** - Ideal for high-frequency operations
#### Comparison: Point-Free vs. Imperative
```go
// Imperative style - reconstruction on EVERY call
func processUserImperative(ctx context.Context) Either[error, Database] {
// This function body is re-interpreted/executed every time
dbComp := getDatabase()(ctx)()
dbReader, err := either.Unwrap(dbComp)
if err != nil {
return Left[Database](err)
}
db := dbReader(dbConfig)
// ... manual composition happens on every invocation
return Right[error](db)
}
// Point-free style - composition built ONCE
var processUserPointFree = F.Flow3(
getDatabase,
SequenceReader[DatabaseConfig, Database],
applyConfig(dbConfig),
)
// Benchmark scenario: 1000 executions
for i := 0; i < 1000; i++ {
// Imperative: pays composition cost 1000 times
_ = processUserImperative(ctx)
// Point-free: pays composition cost once, execution cost 1000 times
_ = processUserPointFree(ctx)()
}
```
#### When This Matters Most
The performance benefit of eager construction is particularly important for:
- **High-frequency operations** - APIs, event handlers, request processors
- **Batch processing** - Same pipeline processes many items
- **Long-running services** - Pipelines defined once at startup, executed millions of times
- **Hot code paths** - Performance-critical sections that run repeatedly
- **Stream processing** - Processing continuous data streams
#### Example: API Handler
```go
// Define pipeline once at application startup
var handleUserRequest = F.Flow4(
parseRequest,
SequenceReader[Database, UserRequest],
applyDatabase(db),
Chain(validateAndProcess),
)
// Execute thousands of times per second
func apiHandler(w http.ResponseWriter, r *http.Request) {
// No composition overhead - just execution
result := handleUserRequest(r.Context())()
// ... handle result
}
```
#### Memory and CPU Efficiency
```go
// Point-free: O(1) composition overhead
var pipeline = F.Flow5(step1, step2, step3, step4, step5)
// Composed once, stored in memory
// Execute N times: O(N) execution cost only
for i := 0; i < N; i++ {
result := pipeline(input[i])
}
// Imperative: O(N) composition + execution cost
for i := 0; i < N; i++ {
// Composition logic runs every iteration
result := step5(step4(step3(step2(step1(input[i])))))
}
```
### 2. **Improved Testability**
Inject test dependencies easily:
```go
// Production
prodDB := Database{ConnectionString: "prod:5432"}
prodQuery := queryWithDB(prodDB)
// Testing
testDB := Database{ConnectionString: "test:5432"}
testQuery := queryWithDB(testDB)
// Same computation, different dependencies
```
### 3. **Better Separation of Concerns**
Separate configuration from execution:
```go
// Configuration phase (pure, no effects)
cfg := loadConfig()
computation := sequenced(cfg)
// Execution phase (with effects)
result := computation(ctx)()
```
### 4. **Enhanced Composability**
Build complex pipelines from simple pieces:
```go
var processUser = F.Flow4(
loadUserConfig, // ReaderIOResult[Reader[Database, User]]
SequenceReader, // func(Database) ReaderIOResult[User]
applyDatabase(db), // ReaderIOResult[User]
Chain(validateUser), // ReaderIOResult[ValidatedUser]
)
```
### 5. **Reduced Boilerplate**
No need to manually thread parameters:
```go
// Without Sequence - manual threading
func processWithConfig(cfg Config) ReaderIOResult[Result] {
return func(ctx context.Context) func() Either[error, Result] {
return func() Either[error, Result] {
comp := getComputation()(ctx)()
if reader, err := either.Unwrap(comp); err == nil {
value := reader(cfg)
// ... more processing
}
// ... error handling
}
}
}
// With Sequence - point-free
var processWithConfig = F.Flow2(
getComputation,
SequenceReader[Config, Result],
)
```
## Examples
### Example 1: Database Query with Configuration
```go
type QueryConfig struct {
Timeout time.Duration
MaxRows int
}
type Database struct {
ConnectionString string
}
// Without Sequence
func executeQueryOld(cfg QueryConfig, db Database) ReaderIOResult[[]Row] {
return func(ctx context.Context) func() Either[error, []Row] {
return func() Either[error, []Row] {
// Must manually handle all parameters
// ...
}
}
}
// With Sequence
func makeQuery(ctx context.Context) func() Either[error, func(Database) []Row] {
return func() Either[error, func(Database) []Row] {
return Right[error](func(db Database) []Row {
// Implementation
return []Row{}
})
}
}
var executeQuery = F.Pipe1(
makeQuery,
SequenceReader[Database, []Row],
)
// Usage
db := Database{ConnectionString: "localhost:5432"}
query := executeQuery(db)
result := query(ctx)()
```
### Example 2: Multi-Service Architecture
```go
type ServiceRegistry struct {
AuthService AuthService
UserService UserService
EmailService EmailService
}
// Create computations that depend on services
makeAuthCheck := func(ctx context.Context) func() Either[error, func(ServiceRegistry) bool] {
// ... implementation
}
makeSendEmail := func(ctx context.Context) func() Either[error, func(ServiceRegistry) error] {
// ... implementation
}
// Sequence them
authCheck := SequenceReader[ServiceRegistry, bool](makeAuthCheck)
sendEmail := SequenceReader[ServiceRegistry, error](makeSendEmail)
// Inject services once
registry := ServiceRegistry{ /* ... */ }
checkAuth := authCheck(registry)
sendMail := sendEmail(registry)
// Use with different contexts
if isAuth, _ := either.Unwrap(checkAuth(ctx1)()); isAuth {
sendMail(ctx2)()
}
```
### Example 3: Configuration-Driven Pipeline
```go
type PipelineConfig struct {
Stage1Config Stage1Config
Stage2Config Stage2Config
Stage3Config Stage3Config
}
// Define stages
stage1 := SequenceReader[Stage1Config, IntermediateResult1](makeStage1)
stage2 := SequenceReader[Stage2Config, IntermediateResult2](makeStage2)
stage3 := SequenceReader[Stage3Config, FinalResult](makeStage3)
// Build pipeline with configuration
func buildPipeline(cfg PipelineConfig) ReaderIOResult[FinalResult] {
return F.Pipe2(
stage1(cfg.Stage1Config),
Chain(func(r1 IntermediateResult1) ReaderIOResult[IntermediateResult2] {
return stage2(cfg.Stage2Config)
}),
Chain(func(r2 IntermediateResult2) ReaderIOResult[FinalResult] {
return stage3(cfg.Stage3Config)
}),
)
}
// Execute pipeline
cfg := loadPipelineConfig()
pipeline := buildPipeline(cfg)
result := pipeline(ctx)()
```
## Comparison: With and Without Sequence
### Without Sequence (Imperative Style)
```go
func processUser(userID string) ReaderIOResult[ProcessedUser] {
return func(ctx context.Context) func() Either[error, ProcessedUser] {
return func() Either[error, ProcessedUser] {
// Get database
dbComp := getDatabase()(ctx)()
dbReader, err := either.Unwrap(dbComp)
if err != nil {
return Left[ProcessedUser](err)
}
db := dbReader(dbConfig)
// Get user
userComp := getUser(userID)(ctx)()
userReader, err := either.Unwrap(userComp)
if err != nil {
return Left[ProcessedUser](err)
}
user := userReader(db)
// Process user
processComp := processUserData(user)(ctx)()
processReader, err := either.Unwrap(processComp)
if err != nil {
return Left[ProcessedUser](err)
}
result := processReader(processingConfig)
return Right[error](result)
}
}
}
```
### With Sequence (Point-Free Style)
```go
var processUser = func(userID string) ReaderIOResult[ProcessedUser] {
return F.Pipe4(
getDatabase(),
SequenceReader[DatabaseConfig, Database],
applyConfig(dbConfig),
Chain(func(db Database) ReaderIOResult[User] {
return F.Pipe2(
getUser(userID),
SequenceReader[Database, User],
applyDB(db),
)
}),
Chain(func(user User) ReaderIOResult[ProcessedUser] {
return F.Pipe2(
processUserData(user),
SequenceReader[ProcessingConfig, ProcessedUser],
applyConfig(processingConfig),
)
}),
)
}
```
## Key Takeaways
1. **Sequence functions flip parameter order** to enable partial application
2. **Dependencies come first**, making them easy to inject and test
3. **Point-free style** becomes natural and readable
4. **Composition** is enhanced through proper parameter ordering
5. **Reusability** increases as computations can be specialized early
6. **Testability** improves through easy dependency injection
7. **Separation of concerns** is clearer (configuration vs. execution)
8. **Performance benefit**: Eager construction (once) + lazy execution (many times) = efficiency for repeated operations
## When to Use Sequence
Use `Sequence*` functions when:
- ✅ You want to partially apply environment/configuration parameters
- ✅ You're building reusable computations with injected dependencies
- ✅ You need to test with different dependency implementations
- ✅ You're composing complex pipelines in point-free style
- ✅ You want to separate configuration from execution
- ✅ You're working with nested Reader-like structures
Don't use `Sequence*` when:
- ❌ The original parameter order is already optimal
- ❌ You're not doing any composition or partial application
- ❌ The added abstraction doesn't provide value
- ❌ The code is simpler without it
## Conclusion
The `Sequence*` functions are powerful tools for enabling point-free style programming in Go. By flipping the parameter order of nested monadic structures, they make it easy to:
- Partially apply dependencies
- Build composable pipelines
- Improve testability
- Write more declarative code
While they add a layer of abstraction, the benefits in terms of code reusability, testability, and composability make them invaluable for functional programming in Go.

View File

@@ -18,14 +18,13 @@ package readerioresult
import (
"context"
"github.com/IBM/fp-go/v2/context/readerio"
F "github.com/IBM/fp-go/v2/function"
"github.com/IBM/fp-go/v2/internal/apply"
"github.com/IBM/fp-go/v2/io"
"github.com/IBM/fp-go/v2/ioeither"
"github.com/IBM/fp-go/v2/ioresult"
L "github.com/IBM/fp-go/v2/optics/lens"
"github.com/IBM/fp-go/v2/reader"
"github.com/IBM/fp-go/v2/readerio"
RIOR "github.com/IBM/fp-go/v2/readerioresult"
"github.com/IBM/fp-go/v2/result"
)
@@ -96,7 +95,7 @@ func Bind[S1, S2, T any](
setter func(T) func(S1) S2,
f Kleisli[S1, T],
) Operator[S1, S2] {
return RIOR.Bind(setter, f)
return RIOR.Bind(setter, WithContextK(f))
}
// Let attaches the result of a computation to a context [S1] to produce a context [S2]
@@ -128,6 +127,13 @@ func BindTo[S1, T any](
return RIOR.BindTo[context.Context](setter)
}
//go:inline
func BindToP[S1, T any](
setter Prism[S1, T],
) Operator[T, S1] {
return BindTo(setter.ReverseGet)
}
// ApS attaches a value to a context [S1] to produce a context [S2] by considering
// the context and the value concurrently (using Applicative rather than Monad).
// This allows independent computations to be combined without one depending on the result of the other.
@@ -214,7 +220,7 @@ func ApS[S1, S2, T any](
//
//go:inline
func ApSL[S, T any](
lens L.Lens[S, T],
lens Lens[S, T],
fa ReaderIOResult[T],
) Operator[S, S] {
return ApS(lens.Set, fa)
@@ -253,10 +259,10 @@ func ApSL[S, T any](
//
//go:inline
func BindL[S, T any](
lens L.Lens[S, T],
lens Lens[S, T],
f Kleisli[T, T],
) Operator[S, S] {
return RIOR.BindL(lens, f)
return RIOR.BindL(lens, WithContextK(f))
}
// LetL is a variant of Let that uses a lens to focus on a specific part of the context.
@@ -289,8 +295,8 @@ func BindL[S, T any](
//
//go:inline
func LetL[S, T any](
lens L.Lens[S, T],
f func(T) T,
lens Lens[S, T],
f Endomorphism[T],
) Operator[S, S] {
return RIOR.LetL[context.Context](lens, f)
}
@@ -322,7 +328,7 @@ func LetL[S, T any](
//
//go:inline
func LetToL[S, T any](
lens L.Lens[S, T],
lens Lens[S, T],
b T,
) Operator[S, S] {
return RIOR.LetToL[context.Context](lens, b)
@@ -398,7 +404,7 @@ func BindReaderK[S1, S2, T any](
//go:inline
func BindReaderIOK[S1, S2, T any](
setter func(T) func(S1) S2,
f readerio.Kleisli[context.Context, S1, T],
f readerio.Kleisli[S1, T],
) Operator[S1, S2] {
return Bind(setter, F.Flow2(f, FromReaderIO[T]))
}
@@ -443,7 +449,7 @@ func BindResultK[S1, S2, T any](
//
//go:inline
func BindIOEitherKL[S, T any](
lens L.Lens[S, T],
lens Lens[S, T],
f ioresult.Kleisli[T, T],
) Operator[S, S] {
return BindL(lens, F.Flow2(f, FromIOEither[T]))
@@ -458,7 +464,7 @@ func BindIOEitherKL[S, T any](
//
//go:inline
func BindIOResultKL[S, T any](
lens L.Lens[S, T],
lens Lens[S, T],
f ioresult.Kleisli[T, T],
) Operator[S, S] {
return BindL(lens, F.Flow2(f, FromIOEither[T]))
@@ -474,7 +480,7 @@ func BindIOResultKL[S, T any](
//
//go:inline
func BindIOKL[S, T any](
lens L.Lens[S, T],
lens Lens[S, T],
f io.Kleisli[T, T],
) Operator[S, S] {
return BindL(lens, F.Flow2(f, FromIO[T]))
@@ -490,7 +496,7 @@ func BindIOKL[S, T any](
//
//go:inline
func BindReaderKL[S, T any](
lens L.Lens[S, T],
lens Lens[S, T],
f reader.Kleisli[context.Context, T, T],
) Operator[S, S] {
return BindL(lens, F.Flow2(f, FromReader[T]))
@@ -506,8 +512,8 @@ func BindReaderKL[S, T any](
//
//go:inline
func BindReaderIOKL[S, T any](
lens L.Lens[S, T],
f readerio.Kleisli[context.Context, T, T],
lens Lens[S, T],
f readerio.Kleisli[T, T],
) Operator[S, S] {
return BindL(lens, F.Flow2(f, FromReaderIO[T]))
}
@@ -627,7 +633,7 @@ func ApResultS[S1, S2, T any](
//
//go:inline
func ApIOEitherSL[S, T any](
lens L.Lens[S, T],
lens Lens[S, T],
fa IOResult[T],
) Operator[S, S] {
return F.Bind2nd(F.Flow2[ReaderIOResult[S], ioresult.Operator[S, S]], ioresult.ApSL(lens, fa))
@@ -642,7 +648,7 @@ func ApIOEitherSL[S, T any](
//
//go:inline
func ApIOResultSL[S, T any](
lens L.Lens[S, T],
lens Lens[S, T],
fa IOResult[T],
) Operator[S, S] {
return F.Bind2nd(F.Flow2[ReaderIOResult[S], ioresult.Operator[S, S]], ioresult.ApSL(lens, fa))
@@ -657,7 +663,7 @@ func ApIOResultSL[S, T any](
//
//go:inline
func ApIOSL[S, T any](
lens L.Lens[S, T],
lens Lens[S, T],
fa IO[T],
) Operator[S, S] {
return ApSL(lens, FromIO(fa))
@@ -672,7 +678,7 @@ func ApIOSL[S, T any](
//
//go:inline
func ApReaderSL[S, T any](
lens L.Lens[S, T],
lens Lens[S, T],
fa Reader[context.Context, T],
) Operator[S, S] {
return ApSL(lens, FromReader(fa))
@@ -687,7 +693,7 @@ func ApReaderSL[S, T any](
//
//go:inline
func ApReaderIOSL[S, T any](
lens L.Lens[S, T],
lens Lens[S, T],
fa ReaderIO[T],
) Operator[S, S] {
return ApSL(lens, FromReaderIO(fa))
@@ -702,7 +708,7 @@ func ApReaderIOSL[S, T any](
//
//go:inline
func ApEitherSL[S, T any](
lens L.Lens[S, T],
lens Lens[S, T],
fa Result[T],
) Operator[S, S] {
return ApSL(lens, FromEither(fa))
@@ -717,7 +723,7 @@ func ApEitherSL[S, T any](
//
//go:inline
func ApResultSL[S, T any](
lens L.Lens[S, T],
lens Lens[S, T],
fa Result[T],
) Operator[S, S] {
return ApSL(lens, FromResult(fa))

View File

@@ -203,9 +203,7 @@ func TestApS_EmptyState(t *testing.T) {
result := res(t.Context())()
assert.True(t, E.IsRight(result))
emptyOpt := E.ToOption(result)
assert.True(t, O.IsSome(emptyOpt))
empty, _ := O.Unwrap(emptyOpt)
assert.Equal(t, Empty{}, empty)
assert.Equal(t, O.Of(Empty{}), emptyOpt)
}
func TestApS_ChainedWithBind(t *testing.T) {

View File

@@ -16,11 +16,14 @@
package readerioresult
import (
F "github.com/IBM/fp-go/v2/function"
RIOR "github.com/IBM/fp-go/v2/readerioresult"
)
// Bracket makes sure that a resource is cleaned up in the event of an error. The release action is called regardless of
// whether the body action returns an error or not.
//
//go:inline
func Bracket[
A, B, ANY any](
@@ -28,5 +31,5 @@ func Bracket[
use Kleisli[A, B],
release func(A, Either[B]) ReaderIOResult[ANY],
) ReaderIOResult[B] {
return RIOR.Bracket(acquire, use, release)
return RIOR.Bracket(acquire, F.Flow2(use, WithContext), release)
}

View File

@@ -19,6 +19,7 @@ import (
"context"
CIOE "github.com/IBM/fp-go/v2/context/ioresult"
F "github.com/IBM/fp-go/v2/function"
"github.com/IBM/fp-go/v2/ioeither"
)
@@ -34,9 +35,53 @@ import (
// Returns a ReaderIOResult that checks for cancellation before executing.
func WithContext[A any](ma ReaderIOResult[A]) ReaderIOResult[A] {
return func(ctx context.Context) IOEither[A] {
if err := context.Cause(ctx); err != nil {
return ioeither.Left[A](err)
if ctx.Err() != nil {
return ioeither.Left[A](context.Cause(ctx))
}
return CIOE.WithContext(ctx, ma(ctx))
}
}
// WithContextK wraps a Kleisli arrow with context cancellation checking.
// This ensures that the computation checks for context cancellation before executing,
// providing a convenient way to add cancellation awareness to Kleisli arrows.
//
// This is particularly useful when composing multiple Kleisli arrows where each step
// should respect context cancellation.
//
// Type Parameters:
// - A: The input type of the Kleisli arrow
// - B: The output type of the Kleisli arrow
//
// Parameters:
// - f: The Kleisli arrow to wrap with context checking
//
// Returns:
// - A Kleisli arrow that checks for cancellation before executing
//
// Example:
//
// fetchUser := func(id int) ReaderIOResult[User] {
// return func(ctx context.Context) IOResult[User] {
// return func() Result[User] {
// // Long-running operation
// return result.Of(User{ID: id})
// }
// }
// }
//
// // Wrap with context checking
// safeFetch := WithContextK(fetchUser)
//
// // If context is cancelled, returns immediately without executing fetchUser
// ctx, cancel := context.WithCancel(context.Background())
// cancel() // Cancel immediately
// result := safeFetch(123)(ctx)() // Returns context.Canceled error
//
//go:inline
func WithContextK[A, B any](f Kleisli[A, B]) Kleisli[A, B] {
return F.Flow2(
f,
WithContext,
)
}

View File

@@ -0,0 +1,60 @@
package readerioresult
import (
"time"
"github.com/IBM/fp-go/v2/circuitbreaker"
"github.com/IBM/fp-go/v2/option"
"github.com/IBM/fp-go/v2/retry"
)
type (
ClosedState = circuitbreaker.ClosedState
Env[T any] = Pair[IORef[circuitbreaker.BreakerState], ReaderIOResult[T]]
CircuitBreaker[T any] = State[Env[T], ReaderIOResult[T]]
)
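// MakeCircuitBreaker creates a circuit breaker for ReaderIOResult computations.
// It wires the generic circuitbreaker implementation to this monad, using the
// given time source, initial closed state, error classifier, reset (retry)
// policy and metrics sink.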
func MakeCircuitBreaker[T any](
currentTime IO[time.Time],
closedState ClosedState,
checkError option.Kleisli[error, error],
policy retry.RetryPolicy,
metrics circuitbreaker.Metrics,
) CircuitBreaker[T] {
return circuitbreaker.MakeCircuitBreaker[error, T](
Left,
ChainFirstIOK,
ChainFirstLeftIOK,
FromIO,
Flap,
Flatten,
currentTime,
closedState,
circuitbreaker.MakeCircuitBreakerError,
checkError,
policy,
metrics,
)
}
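// MakeSingletonBreaker returns an Operator that guards ReaderIOResult
// computations with a single circuit breaker whose state is shared by all
// computations routed through the returned operator.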
func MakeSingletonBreaker[T any](
currentTime IO[time.Time],
closedState ClosedState,
checkError option.Kleisli[error, error],
policy retry.RetryPolicy,
metrics circuitbreaker.Metrics,
) Operator[T, T] {
return circuitbreaker.MakeSingletonBreaker(
MakeCircuitBreaker[T](
currentTime,
closedState,
checkError,
policy,
metrics,
),
closedState,
)
}

View File

@@ -0,0 +1,246 @@
# Circuit Breaker Documentation
## Overview
The `circuitbreaker.go` file provides a circuit breaker implementation for the `readerioresult` package. A circuit breaker is a design pattern used to detect failures and prevent cascading failures in distributed systems by temporarily blocking operations that are likely to fail.
## Package
```go
package readerioresult
```
This is part of the `context/readerioresult` package, which provides functional programming abstractions for operations that:
- Depend on a `context.Context` (Reader aspect)
- Perform side effects (IO aspect)
- Can fail with an `error` (Result/Either aspect)
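Expanded to its underlying shape (as spelled out in the package's doc comments), a `ReaderIOResult[A]` is a function of the context that yields an effect producing an `Either`:
```go
// ReaderIOResult[A], with its aliases expanded:
//
//	func(ctx context.Context) func() either.Either[error, A]
```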
## Type Definitions
### ClosedState
```go
type ClosedState = circuitbreaker.ClosedState
```
A type alias for the circuit breaker's closed state. When the circuit is closed, requests are allowed to pass through normally. The closed state tracks success and failure counts to determine when to open the circuit.
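For example, the tests accompanying this implementation use a counter-based closed state that opens the circuit after a fixed number of recorded failures:
```go
// Open the circuit after 3 recorded failures.
closedState := circuitbreaker.MakeClosedStateCounter(3)
```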
### Env[T any]
```go
type Env[T any] = Pair[IORef[circuitbreaker.BreakerState], ReaderIOResult[T]]
```
The environment type for the circuit breaker state machine. It contains:
- `IORef[circuitbreaker.BreakerState]`: A mutable reference to the current breaker state
- `ReaderIOResult[T]`: The computation to be protected by the circuit breaker
### CircuitBreaker[T any]
```go
type CircuitBreaker[T any] = State[Env[T], ReaderIOResult[T]]
```
The main circuit breaker type. It's a state monad that:
- Takes an environment containing the breaker state and the protected computation
- Returns a new environment and a wrapped computation that respects the circuit breaker logic
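A minimal sketch of driving the state machine, following the pattern used by the accompanying tests (it assumes `cb`, `closedState`, and `ctx` are in scope as in the full example below):
```go
// Mutable breaker state, initialized to the closed configuration.
stateRef := circuitbreaker.MakeClosedIORef(closedState)()

// Pair the state reference with the computation to protect.
env := pair.MakePair(stateRef, readerioresult.Of("hello"))

// Apply the circuit breaker; the second component of the resulting
// environment is the protected computation.
resultEnv := cb(env)
protected := pair.Tail(resultEnv)
outcome := protected(ctx)() // Result[string]
```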
## Functions
### MakeCircuitBreaker
```go
func MakeCircuitBreaker[T any](
currentTime IO[time.Time],
closedState ClosedState,
checkError option.Kleisli[error, error],
policy retry.RetryPolicy,
metrics circuitbreaker.Metrics,
) CircuitBreaker[T]
```
Creates a new circuit breaker with the specified configuration.
#### Parameters
- **currentTime** `IO[time.Time]`: A function that returns the current time. This can be a virtual timer for testing purposes, allowing you to control time progression in tests.
- **closedState** `ClosedState`: The initial closed state configuration. This defines:
- Maximum number of failures before opening the circuit
- Time window for counting failures
- Other closed state parameters
- **checkError** `option.Kleisli[error, error]`: A function that determines whether an error should be counted as a failure. Returns:
- `Some(error)`: The error should be counted as a failure
- `None`: The error should be ignored (not counted as a failure)
This allows you to distinguish between transient errors (that should trigger circuit breaking) and permanent errors (that shouldn't); see the sketch after this parameter list.
- **policy** `retry.RetryPolicy`: The retry policy that determines:
- How long to wait before attempting to close the circuit (reset time)
- Exponential backoff or other delay strategies
- Maximum number of retry attempts
- **metrics** `circuitbreaker.Metrics`: A metrics sink that receives circuit breaker events such as state transitions and recorded failures. One convenient way to obtain it is `circuitbreaker.MakeMetricsFromLogger`, which forwards events to a standard logger.
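As a sketch, an error classifier and retry policy could be built like this (`ErrNotFound` is a hypothetical sentinel error used only for illustration; the policy mirrors the one in the usage example below):
```go
// Ignore a specific business error; count everything else as a failure.
checkError := func(err error) option.Option[error] {
	if errors.Is(err, ErrNotFound) { // hypothetical sentinel error
		return option.None[error]()
	}
	return option.Some(err)
}

// Give up after 5 retries, with exponential backoff starting at 100ms.
policy := retry.Monoid.Concat(
	retry.LimitRetries(5),
	retry.ExponentialBackoff(100*time.Millisecond),
)
```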
#### Returns
A `CircuitBreaker[T]` that wraps computations with circuit breaker logic.
#### Circuit Breaker States
The circuit breaker operates in three states:
1. **Closed**: Normal operation. Requests pass through. Failures are counted.
- If failure threshold is exceeded, transitions to Open state
2. **Open**: Circuit is broken. Requests fail immediately without executing.
- After reset time expires, transitions to Half-Open state
3. **Half-Open** (Canary): Testing if the service has recovered.
- Allows a single test request (canary request)
- If canary succeeds, transitions to Closed state
- If canary fails, transitions back to Open state with extended reset time
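The current state can be observed through the state reference, which is how the accompanying tests assert on transitions:
```go
state := ioref.Read(stateRef)() // read the current BreakerState
if circuitbreaker.IsOpen(state) {
	// requests are short-circuited with a CircuitBreakerError
}
if circuitbreaker.IsClosed(state) {
	// requests pass through normally
}
```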
#### Implementation Details
The function delegates to the generic `circuitbreaker.MakeCircuitBreaker` function, providing the necessary type-specific operations:
- **Left**: Creates a failed computation from an error
- **ChainFirstIOK**: Chains an IO operation that runs for side effects on success
- **ChainFirstLeftIOK**: Chains an IO operation that runs for side effects on failure
- **FromIO**: Lifts an IO computation into ReaderIOResult
- **Flap**: Applies a computation to a function
- **Flatten**: Flattens nested ReaderIOResult structures
These operations allow the generic circuit breaker to work with the `ReaderIOResult` monad.
## Usage Example
```go
import (
"context"
"fmt"
"log"
"time"
"github.com/IBM/fp-go/v2/circuitbreaker"
"github.com/IBM/fp-go/v2/context/readerioresult"
"github.com/IBM/fp-go/v2/ioref"
"github.com/IBM/fp-go/v2/option"
"github.com/IBM/fp-go/v2/pair"
"github.com/IBM/fp-go/v2/result"
"github.com/IBM/fp-go/v2/retry"
)
// Create a circuit breaker configuration
func createCircuitBreaker() readerioresult.CircuitBreaker[string] {
// Use real time
currentTime := func() time.Time { return time.Now() }
// Configure closed state: open after 5 failures in 10 seconds
closedState := circuitbreaker.MakeClosedState(5, 10*time.Second)
// Check all errors (count all as failures)
checkError := func(err error) option.Option[error] {
return option.Some(err)
}
// Retry policy: exponential backoff with max 5 retries
policy := retry.Monoid.Concat(
retry.LimitRetries(5),
retry.ExponentialBackoff(100*time.Millisecond),
)
// Metrics sink backed by the standard logger
metrics := circuitbreaker.MakeMetricsFromLogger("CircuitBreaker", log.Default())
return readerioresult.MakeCircuitBreaker[string](
currentTime,
closedState,
checkError,
policy,
metrics,
)
}
// Use the circuit breaker
func main() {
cb := createCircuitBreaker()
// Create initial state
stateRef := ioref.NewIORef(circuitbreaker.InitialState())
// Your protected operation
operation := func(ctx context.Context) readerioresult.IOResult[string] {
return func() readerioresult.Result[string] {
// Your actual operation here
return result.Of("success")
}
}
// Apply circuit breaker
env := pair.MakePair(stateRef, operation)
resultEnv := cb(env)
// Execute the protected operation
ctx := context.Background()
protectedOp := pair.Tail(resultEnv)
outcome := protectedOp(ctx)()
fmt.Println(outcome)
}
```
## Testing with Virtual Timer
For testing, you can provide a virtual timer instead of `time.Now()`:
```go
// Virtual timer for testing
type VirtualTimer struct {
current time.Time
}
func (vt *VirtualTimer) Now() time.Time {
return vt.current
}
func (vt *VirtualTimer) Advance(d time.Duration) {
vt.current = vt.current.Add(d)
}
// Use in tests
func TestCircuitBreaker(t *testing.T) {
vt := &VirtualTimer{current: time.Date(2024, 1, 1, 0, 0, 0, 0, time.UTC)}
currentTime := func() time.Time { return vt.Now() }
cb := readerioresult.MakeCircuitBreaker[string](
currentTime,
closedState,
checkError,
policy,
metrics,
)
// Test circuit breaker behavior
// Advance time as needed
vt.Advance(5 * time.Second)
}
```
## Related Types
- `circuitbreaker.BreakerState`: The internal state of the circuit breaker (closed or open)
- `circuitbreaker.ClosedState`: Configuration for the closed state
- `retry.RetryPolicy`: Policy for retry delays and limits
- `option.Kleisli[error, error]`: Function type for error checking
- `circuitbreaker.Metrics`: Metrics sink for circuit breaker events
## See Also
- `circuitbreaker` package: Generic circuit breaker implementation
- `retry` package: Retry policies and strategies
- `readerioresult` package: Core ReaderIOResult monad operations

View File

@@ -0,0 +1,974 @@
// Copyright (c) 2023 - 2025 IBM Corp.
// All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package readerioresult
import (
"errors"
"log"
"sync"
"testing"
"time"
"github.com/IBM/fp-go/v2/array"
"github.com/IBM/fp-go/v2/circuitbreaker"
"github.com/IBM/fp-go/v2/ioref"
"github.com/IBM/fp-go/v2/option"
"github.com/IBM/fp-go/v2/pair"
"github.com/IBM/fp-go/v2/result"
"github.com/IBM/fp-go/v2/retry"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// VirtualTimer provides a controllable time source for testing
type VirtualTimer struct {
mu sync.Mutex
current time.Time
}
// NewVirtualTimer creates a new virtual timer starting at the given time
func NewVirtualTimer(start time.Time) *VirtualTimer {
return &VirtualTimer{current: start}
}
// Now returns the current virtual time
func (vt *VirtualTimer) Now() time.Time {
vt.mu.Lock()
defer vt.mu.Unlock()
return vt.current
}
// Advance moves the virtual time forward by the given duration
func (vt *VirtualTimer) Advance(d time.Duration) {
vt.mu.Lock()
defer vt.mu.Unlock()
vt.current = vt.current.Add(d)
}
// Set sets the virtual time to a specific value
func (vt *VirtualTimer) Set(t time.Time) {
vt.mu.Lock()
defer vt.mu.Unlock()
vt.current = t
}
// Helper function to create test metrics backed by the default logger (the collected-messages parameter is currently unused)
func testMetrics(_ *[]string) circuitbreaker.Metrics {
return circuitbreaker.MakeMetricsFromLogger("testMetrics", log.Default())
}
// Helper function to create a simple closed state
func testCBClosedState() circuitbreaker.ClosedState {
return circuitbreaker.MakeClosedStateCounter(3)
}
// Helper function to create a test retry policy
func testCBRetryPolicy() retry.RetryPolicy {
return retry.Monoid.Concat(
retry.LimitRetries(3),
retry.ExponentialBackoff(100*time.Millisecond),
)
}
// Helper function that checks all errors
func checkAllErrors(err error) option.Option[error] {
return option.Some(err)
}
// Helper function that ignores specific errors
func ignoreSpecificError(ignoredMsg string) func(error) option.Option[error] {
return func(err error) option.Option[error] {
if err.Error() == ignoredMsg {
return option.None[error]()
}
return option.Some(err)
}
}
// TestCircuitBreaker_SuccessfulOperation tests that successful operations
// pass through the circuit breaker without issues
func TestCircuitBreaker_SuccessfulOperation(t *testing.T) {
vt := NewVirtualTimer(time.Date(2024, 1, 1, 0, 0, 0, 0, time.UTC))
var logMessages []string
cb := MakeCircuitBreaker[string](
vt.Now,
testCBClosedState(),
checkAllErrors,
testCBRetryPolicy(),
testMetrics(&logMessages),
)
// Create initial state
stateRef := circuitbreaker.MakeClosedIORef(testCBClosedState())()
// Successful operation
operation := Of("success")
// Apply circuit breaker
env := pair.MakePair(stateRef, operation)
resultEnv := cb(env)
// Execute
ctx := t.Context()
protectedOp := pair.Tail(resultEnv)
outcome := protectedOp(ctx)()
assert.Equal(t, result.Of("success"), outcome)
}
// TestCircuitBreaker_SingleFailure tests that a single failure is handled
// but doesn't open the circuit
func TestCircuitBreaker_SingleFailure(t *testing.T) {
vt := NewVirtualTimer(time.Date(2024, 1, 1, 0, 0, 0, 0, time.UTC))
var logMessages []string
cb := MakeCircuitBreaker[string](
vt.Now,
testCBClosedState(),
checkAllErrors,
testCBRetryPolicy(),
testMetrics(&logMessages),
)
stateRef := circuitbreaker.MakeClosedIORef(testCBClosedState())()
expError := errors.New("operation failed")
// Failing operation
operation := Left[string](expError)
env := pair.MakePair(stateRef, operation)
resultEnv := cb(env)
ctx := t.Context()
protectedOp := pair.Tail(resultEnv)
outcome := protectedOp(ctx)()
assert.Equal(t, result.Left[string](expError), outcome)
// Circuit should still be closed after one failure
state := ioref.Read(stateRef)()
assert.True(t, circuitbreaker.IsClosed(state))
}
// TestCircuitBreaker_OpensAfterThreshold tests that the circuit opens
// after exceeding the failure threshold
func TestCircuitBreaker_OpensAfterThreshold(t *testing.T) {
vt := NewVirtualTimer(time.Date(2024, 1, 1, 0, 0, 0, 0, time.UTC))
var logMessages []string
cb := MakeCircuitBreaker[string](
vt.Now,
testCBClosedState(), // Opens after 3 failures
checkAllErrors,
testCBRetryPolicy(),
testMetrics(&logMessages),
)
stateRef := circuitbreaker.MakeClosedIORef(testCBClosedState())()
expError := errors.New("operation failed")
// Failing operation
operation := Left[string](expError)
ctx := t.Context()
// Execute 3 failures to open the circuit
for range 3 {
env := pair.MakePair(stateRef, operation)
resultEnv := cb(env)
protectedOp := pair.Tail(resultEnv)
outcome := protectedOp(ctx)()
assert.Equal(t, result.Left[string](expError), outcome)
}
// Circuit should now be open
state := ioref.Read(stateRef)()
assert.True(t, circuitbreaker.IsOpen(state))
// Next request should fail immediately with circuit breaker error
env := pair.MakePair(stateRef, operation)
resultEnv := cb(env)
protectedOp := pair.Tail(resultEnv)
outcome := protectedOp(ctx)()
assert.True(t, result.IsLeft(outcome))
_, err := result.Unwrap(outcome)
var cbErr *circuitbreaker.CircuitBreakerError
assert.ErrorAs(t, err, &cbErr)
}
// TestCircuitBreaker_HalfOpenAfterResetTime tests that the circuit
// transitions to half-open state after the reset time
func TestCircuitBreaker_HalfOpenAfterResetTime(t *testing.T) {
vt := NewVirtualTimer(time.Date(2024, 1, 1, 0, 0, 0, 0, time.UTC))
var logMessages []string
cb := MakeCircuitBreaker[string](
vt.Now,
testCBClosedState(),
checkAllErrors,
testCBRetryPolicy(),
testMetrics(&logMessages),
)
stateRef := circuitbreaker.MakeClosedIORef(testCBClosedState())()
expError := errors.New("operation failed")
// Failing operation
failingOp := Left[string](expError)
ctx := t.Context()
// Open the circuit with 3 failures
for range 3 {
env := pair.MakePair(stateRef, failingOp)
resultEnv := cb(env)
protectedOp := pair.Tail(resultEnv)
outcome := protectedOp(ctx)()
assert.Equal(t, result.Left[string](expError), outcome)
}
// Verify circuit is open
state := ioref.Read(stateRef)()
assert.True(t, circuitbreaker.IsOpen(state))
// Advance time past the reset time (exponential backoff starts at 100ms)
vt.Advance(200 * time.Millisecond)
// Now create a successful operation for the canary request
successOp := Of("success")
// Next request should be a canary request
env := pair.MakePair(stateRef, successOp)
resultEnv := cb(env)
protectedOp := pair.Tail(resultEnv)
outcome := protectedOp(ctx)()
// Canary should succeed
assert.Equal(t, result.Of("success"), outcome)
// Circuit should now be closed again
state = ioref.Read(stateRef)()
assert.True(t, circuitbreaker.IsClosed(state))
}
// TestCircuitBreaker_CanaryFailureExtendsOpenTime tests that a failed
// canary request extends the open time
func TestCircuitBreaker_CanaryFailureExtendsOpenTime(t *testing.T) {
vt := NewVirtualTimer(time.Date(2024, 1, 1, 0, 0, 0, 0, time.UTC))
var logMessages []string
cb := MakeCircuitBreaker[string](
vt.Now,
testCBClosedState(),
checkAllErrors,
testCBRetryPolicy(),
testMetrics(&logMessages),
)
stateRef := circuitbreaker.MakeClosedIORef(testCBClosedState())()
expError := errors.New("operation failed")
// Failing operation
failingOp := Left[string](expError)
ctx := t.Context()
// Open the circuit
for range 3 {
env := pair.MakePair(stateRef, failingOp)
resultEnv := cb(env)
protectedOp := pair.Tail(resultEnv)
outcome := protectedOp(ctx)()
assert.Equal(t, result.Left[string](expError), outcome)
}
// Advance time to trigger canary
vt.Advance(200 * time.Millisecond)
// Canary request fails
env := pair.MakePair(stateRef, failingOp)
resultEnv := cb(env)
protectedOp := pair.Tail(resultEnv)
outcome := protectedOp(ctx)()
assert.True(t, result.IsLeft(outcome))
// Circuit should still be open
state := ioref.Read(stateRef)()
assert.True(t, circuitbreaker.IsOpen(state))
// Immediate next request should fail with circuit breaker error
env = pair.MakePair(stateRef, failingOp)
resultEnv = cb(env)
protectedOp = pair.Tail(resultEnv)
outcome = protectedOp(ctx)()
assert.True(t, result.IsLeft(outcome))
_, err := result.Unwrap(outcome)
var cbErr *circuitbreaker.CircuitBreakerError
assert.ErrorAs(t, err, &cbErr)
}
// TestCircuitBreaker_IgnoredErrorsDoNotCount tests that errors filtered
// by checkError don't count toward opening the circuit
func TestCircuitBreaker_IgnoredErrorsDoNotCount(t *testing.T) {
vt := NewVirtualTimer(time.Date(2024, 1, 1, 0, 0, 0, 0, time.UTC))
var logMessages []string
// Ignore "ignorable error"
checkError := ignoreSpecificError("ignorable error")
cb := MakeCircuitBreaker[string](
vt.Now,
testCBClosedState(),
checkError,
testCBRetryPolicy(),
testMetrics(&logMessages),
)
stateRef := circuitbreaker.MakeClosedIORef(testCBClosedState())()
ctx := t.Context()
ignorableError := errors.New("ignorable error")
// Execute 5 ignorable errors
ignorableOp := Left[string](ignorableError)
for range 5 {
env := pair.MakePair(stateRef, ignorableOp)
resultEnv := cb(env)
protectedOp := pair.Tail(resultEnv)
outcome := protectedOp(ctx)()
assert.Equal(t, result.Left[string](ignorableError), outcome)
}
// Circuit should still be closed
state := ioref.Read(stateRef)()
assert.True(t, circuitbreaker.IsClosed(state))
realError := errors.New("real error")
// Now send a real error
realErrorOp := Left[string](realError)
env := pair.MakePair(stateRef, realErrorOp)
resultEnv := cb(env)
protectedOp := pair.Tail(resultEnv)
outcome := protectedOp(ctx)()
assert.Equal(t, result.Left[string](realError), outcome)
// Circuit should still be closed (only 1 counted error)
state = ioref.Read(stateRef)()
assert.True(t, circuitbreaker.IsClosed(state))
}
// TestCircuitBreaker_MixedSuccessAndFailure tests the circuit behavior
// with a mix of successful and failed operations
func TestCircuitBreaker_MixedSuccessAndFailure(t *testing.T) {
vt := NewVirtualTimer(time.Date(2024, 1, 1, 0, 0, 0, 0, time.UTC))
var logMessages []string
cb := MakeCircuitBreaker[string](
vt.Now,
testCBClosedState(),
checkAllErrors,
testCBRetryPolicy(),
testMetrics(&logMessages),
)
stateRef := circuitbreaker.MakeClosedIORef(testCBClosedState())()
ctx := t.Context()
successOp := Of("success")
expError := errors.New("failure")
failOp := Left[string](expError)
// Pattern: fail, fail, success, fail
ops := array.From(failOp, failOp, successOp, failOp)
for _, op := range ops {
env := pair.MakePair(stateRef, op)
resultEnv := cb(env)
protectedOp := pair.Tail(resultEnv)
_ = protectedOp(ctx)()
}
// Circuit should still be closed (success resets the count)
state := ioref.Read(stateRef)()
assert.True(t, circuitbreaker.IsClosed(state))
}
// TestCircuitBreaker_ConcurrentOperations tests that the circuit breaker
// handles concurrent operations correctly
func TestCircuitBreaker_ConcurrentOperations(t *testing.T) {
vt := NewVirtualTimer(time.Date(2024, 1, 1, 0, 0, 0, 0, time.UTC))
var logMessages []string
cb := MakeCircuitBreaker[int](
vt.Now,
testCBClosedState(),
checkAllErrors,
testCBRetryPolicy(),
testMetrics(&logMessages),
)
stateRef := circuitbreaker.MakeClosedIORef(testCBClosedState())()
ctx := t.Context()
var wg sync.WaitGroup
results := make([]Result[int], 10)
// Launch 10 concurrent operations
for i := range 10 {
wg.Add(1)
go func(idx int) {
defer wg.Done()
op := Of(idx)
env := pair.MakePair(stateRef, op)
resultEnv := cb(env)
protectedOp := pair.Tail(resultEnv)
results[idx] = protectedOp(ctx)()
}(i)
}
wg.Wait()
// All operations should succeed
for i, res := range results {
assert.True(t, result.IsRight(res), "Operation %d should succeed", i)
}
}
// TestCircuitBreaker_DifferentTypes tests that the circuit breaker works
// with different result types
func TestCircuitBreaker_DifferentTypes(t *testing.T) {
vt := NewVirtualTimer(time.Date(2024, 1, 1, 0, 0, 0, 0, time.UTC))
var logMessages []string
// Test with int
cbInt := MakeCircuitBreaker[int](
vt.Now,
testCBClosedState(),
checkAllErrors,
testCBRetryPolicy(),
testMetrics(&logMessages),
)
stateRefInt := circuitbreaker.MakeClosedIORef(testCBClosedState())()
opInt := Of(42)
ctx := t.Context()
envInt := pair.MakePair(stateRefInt, opInt)
resultEnvInt := cbInt(envInt)
protectedOpInt := pair.Tail(resultEnvInt)
outcomeInt := protectedOpInt(ctx)()
assert.Equal(t, result.Of(42), outcomeInt)
// Test with struct
type User struct {
ID int
Name string
}
cbUser := MakeCircuitBreaker[User](
vt.Now,
testCBClosedState(),
checkAllErrors,
testCBRetryPolicy(),
testMetrics(&logMessages),
)
stateRefUser := circuitbreaker.MakeClosedIORef(testCBClosedState())()
opUser := Of(User{ID: 1, Name: "Alice"})
envUser := pair.MakePair(stateRefUser, opUser)
resultEnvUser := cbUser(envUser)
protectedOpUser := pair.Tail(resultEnvUser)
outcomeUser := protectedOpUser(ctx)()
require.Equal(t, result.Of(User{ID: 1, Name: "Alice"}), outcomeUser)
}
// TestCircuitBreaker_VirtualTimerAdvancement tests that the virtual timer
// correctly controls time-based behavior
func TestCircuitBreaker_VirtualTimerAdvancement(t *testing.T) {
startTime := time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC)
vt := NewVirtualTimer(startTime)
// Verify initial time
assert.Equal(t, startTime, vt.Now())
// Advance by 1 hour
vt.Advance(1 * time.Hour)
assert.Equal(t, startTime.Add(1*time.Hour), vt.Now())
// Advance by 30 minutes
vt.Advance(30 * time.Minute)
assert.Equal(t, startTime.Add(90*time.Minute), vt.Now())
// Set to specific time
newTime := time.Date(2024, 6, 15, 10, 30, 0, 0, time.UTC)
vt.Set(newTime)
assert.Equal(t, newTime, vt.Now())
}
// TestCircuitBreaker_InitialState tests that the circuit starts in closed state
func TestCircuitBreaker_InitialState(t *testing.T) {
vt := NewVirtualTimer(time.Date(2024, 1, 1, 0, 0, 0, 0, time.UTC))
var logMessages []string
cb := MakeCircuitBreaker[string](
vt.Now,
testCBClosedState(),
checkAllErrors,
testCBRetryPolicy(),
testMetrics(&logMessages),
)
stateRef := circuitbreaker.MakeClosedIORef(testCBClosedState())()
// Check initial state is closed
state := ioref.Read(stateRef)()
assert.True(t, circuitbreaker.IsClosed(state), "Circuit should start in closed state")
// First operation should execute normally
op := Of("first operation")
ctx := t.Context()
env := pair.MakePair(stateRef, op)
resultEnv := cb(env)
protectedOp := pair.Tail(resultEnv)
outcome := protectedOp(ctx)()
assert.Equal(t, result.Of("first operation"), outcome)
}
// TestCircuitBreaker_ErrorMessageFormat tests that circuit breaker errors
// have appropriate error messages
func TestCircuitBreaker_ErrorMessageFormat(t *testing.T) {
vt := NewVirtualTimer(time.Date(2024, 1, 1, 0, 0, 0, 0, time.UTC))
var logMessages []string
cb := MakeCircuitBreaker[string](
vt.Now,
testCBClosedState(),
checkAllErrors,
testCBRetryPolicy(),
testMetrics(&logMessages),
)
stateRef := circuitbreaker.MakeClosedIORef(testCBClosedState())()
ctx := t.Context()
expError := errors.New("service unavailable")
failOp := Left[string](expError)
// Open the circuit
for range 3 {
env := pair.MakePair(stateRef, failOp)
resultEnv := cb(env)
protectedOp := pair.Tail(resultEnv)
_ = protectedOp(ctx)()
}
// Next request should fail with circuit breaker error
env := pair.MakePair(stateRef, failOp)
resultEnv := cb(env)
protectedOp := pair.Tail(resultEnv)
outcome := protectedOp(ctx)()
assert.True(t, result.IsLeft[string](outcome))
// Error message should indicate circuit breaker is open
_, err := result.Unwrap(outcome)
errMsg := err.Error()
assert.Contains(t, errMsg, "circuit", "Error should mention circuit breaker")
}
// RequestSpec defines a virtual request with timing and outcome information
type RequestSpec struct {
ID int // Unique identifier for the request
StartTime time.Duration // Virtual start time relative to test start
Duration time.Duration // How long the request takes to execute
ShouldFail bool // Whether this request should fail
}
// RequestResult captures the outcome of a request execution
type RequestResult struct {
ID int
StartTime time.Time
EndTime time.Time
Success bool
Error error
CircuitBreakerError bool // True if failed due to circuit breaker being open
}
// TestCircuitBreaker_ConcurrentBatchWithThresholdExceeded tests a complex
// concurrent scenario where:
// 1. Initial requests succeed
// 2. A batch of failures exceeds the threshold, opening the circuit
// 3. Subsequent requests fail immediately due to open circuit
// 4. After timeout, a canary request succeeds
// 5. Following requests succeed again
func TestCircuitBreaker_ConcurrentBatchWithThresholdExceeded(t *testing.T) {
startTime := time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC)
vt := NewVirtualTimer(startTime)
var logMessages []string
// Circuit opens after 3 failures, with exponential backoff starting at 100ms
cb := MakeCircuitBreaker[string](
vt.Now,
testCBClosedState(), // Opens after 3 failures
checkAllErrors,
testCBRetryPolicy(), // 100ms initial backoff
testMetrics(&logMessages),
)
stateRef := circuitbreaker.MakeClosedIORef(testCBClosedState())()
ctx := t.Context()
// Define the request sequence
// Phase 1: Initial successes (0-100ms)
// Phase 2: Failures that exceed threshold (100-200ms) - should open circuit
// Phase 3: Requests during open circuit (200-300ms) - should fail immediately
// Phase 4: After timeout (400ms+) - canary succeeds, then more successes
requests := []RequestSpec{
// Phase 1: Initial successful requests
{ID: 1, StartTime: 0 * time.Millisecond, Duration: 10 * time.Millisecond, ShouldFail: false},
{ID: 2, StartTime: 20 * time.Millisecond, Duration: 10 * time.Millisecond, ShouldFail: false},
// Phase 2: Sequential failures that exceed threshold (3 failures)
{ID: 3, StartTime: 100 * time.Millisecond, Duration: 5 * time.Millisecond, ShouldFail: true},
{ID: 4, StartTime: 110 * time.Millisecond, Duration: 5 * time.Millisecond, ShouldFail: true},
{ID: 5, StartTime: 120 * time.Millisecond, Duration: 5 * time.Millisecond, ShouldFail: true},
{ID: 6, StartTime: 130 * time.Millisecond, Duration: 5 * time.Millisecond, ShouldFail: true},
// Phase 3: Requests during open circuit - should fail with circuit breaker error
{ID: 7, StartTime: 200 * time.Millisecond, Duration: 5 * time.Millisecond, ShouldFail: false},
{ID: 8, StartTime: 210 * time.Millisecond, Duration: 5 * time.Millisecond, ShouldFail: false},
{ID: 9, StartTime: 220 * time.Millisecond, Duration: 5 * time.Millisecond, ShouldFail: false},
// Phase 4: After reset timeout (100ms backoff from last failure at ~125ms = ~225ms)
// Wait longer to ensure we're past the reset time
{ID: 10, StartTime: 400 * time.Millisecond, Duration: 5 * time.Millisecond, ShouldFail: false}, // Canary succeeds
{ID: 11, StartTime: 410 * time.Millisecond, Duration: 5 * time.Millisecond, ShouldFail: false},
{ID: 12, StartTime: 420 * time.Millisecond, Duration: 5 * time.Millisecond, ShouldFail: false},
}
results := make([]RequestResult, len(requests))
// Execute requests sequentially but model them as if they were concurrent
// by advancing the virtual timer to each request's start time
for i, req := range requests {
// Set virtual time to request start time
vt.Set(startTime.Add(req.StartTime))
// Create the operation based on spec
var op ReaderIOResult[string]
if req.ShouldFail {
op = Left[string](errors.New("operation failed"))
} else {
op = Of("success")
}
// Apply circuit breaker
env := pair.MakePair(stateRef, op)
resultEnv := cb(env)
protectedOp := pair.Tail(resultEnv)
// Record start time
execStartTime := vt.Now()
// Execute the operation
outcome := protectedOp(ctx)()
// Advance time by operation duration
vt.Advance(req.Duration)
execEndTime := vt.Now()
// Analyze the result
isSuccess := result.IsRight(outcome)
var err error
var isCBError bool
if !isSuccess {
_, err = result.Unwrap(outcome)
var cbErr *circuitbreaker.CircuitBreakerError
isCBError = errors.As(err, &cbErr)
}
results[i] = RequestResult{
ID: req.ID,
StartTime: execStartTime,
EndTime: execEndTime,
Success: isSuccess,
Error: err,
CircuitBreakerError: isCBError,
}
}
// Verify Phase 1: Initial requests should succeed
assert.True(t, results[0].Success, "Request 1 should succeed")
assert.True(t, results[1].Success, "Request 2 should succeed")
// Verify Phase 2: Failures should be recorded (first 3 fail with actual error)
// The 4th might fail with CB error if circuit opened fast enough
assert.False(t, results[2].Success, "Request 3 should fail")
assert.False(t, results[3].Success, "Request 4 should fail")
assert.False(t, results[4].Success, "Request 5 should fail")
// At least the first 3 failures should be actual operation failures, not CB errors
actualFailures := 0
for i := 2; i <= 4; i++ {
if !results[i].CircuitBreakerError {
actualFailures++
}
}
assert.GreaterOrEqual(t, actualFailures, 3, "At least 3 actual operation failures should occur")
// Verify Phase 3: Requests during open circuit should fail with circuit breaker error
for i := 6; i <= 8; i++ {
assert.False(t, results[i].Success, "Request %d should fail during open circuit", results[i].ID)
assert.True(t, results[i].CircuitBreakerError, "Request %d should fail with circuit breaker error", results[i].ID)
}
// Verify Phase 4: After timeout, canary and subsequent requests should succeed
assert.True(t, results[9].Success, "Request 10 (canary) should succeed")
assert.True(t, results[10].Success, "Request 11 should succeed after circuit closes")
assert.True(t, results[11].Success, "Request 12 should succeed after circuit closes")
// Verify final state is closed
finalState := ioref.Read(stateRef)()
assert.True(t, circuitbreaker.IsClosed(finalState), "Circuit should be closed at the end")
// Log summary for debugging
t.Logf("Test completed with %d requests", len(results))
successCount := 0
cbErrorCount := 0
actualErrorCount := 0
for _, r := range results {
if r.Success {
successCount++
} else if r.CircuitBreakerError {
cbErrorCount++
} else {
actualErrorCount++
}
}
t.Logf("Summary: %d successes, %d circuit breaker errors, %d actual errors",
successCount, cbErrorCount, actualErrorCount)
}
// TestCircuitBreaker_ConcurrentHighLoad tests circuit breaker behavior
// under high concurrent load with mixed success/failure patterns
func TestCircuitBreaker_ConcurrentHighLoad(t *testing.T) {
startTime := time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC)
vt := NewVirtualTimer(startTime)
var logMessages []string
cb := MakeCircuitBreaker[int](
vt.Now,
testCBClosedState(),
checkAllErrors,
testCBRetryPolicy(),
testMetrics(&logMessages),
)
stateRef := circuitbreaker.MakeClosedIORef(testCBClosedState())()
ctx := t.Context()
// Create a large batch of 50 requests
// Pattern: success, success, fail, fail, fail, fail, success, success, ...
// This ensures we have initial successes, then failures to open circuit,
// then more requests that hit the open circuit
numRequests := 50
results := make([]bool, numRequests)
cbErrors := make([]bool, numRequests)
// Execute requests with controlled timing
for i := range numRequests {
// Advance time slightly for each request
vt.Advance(10 * time.Millisecond)
// Pattern: 2 success, 4 failures, repeat
// This ensures we exceed the threshold (3 failures) early on
shouldFail := (i%6) >= 2 && (i%6) < 6
var op ReaderIOResult[int]
if shouldFail {
op = Left[int](errors.New("simulated failure"))
} else {
op = Of(i)
}
env := pair.MakePair(stateRef, op)
resultEnv := cb(env)
protectedOp := pair.Tail(resultEnv)
outcome := protectedOp(ctx)()
results[i] = result.IsRight(outcome)
if !results[i] {
_, err := result.Unwrap(outcome)
var cbErr *circuitbreaker.CircuitBreakerError
cbErrors[i] = errors.As(err, &cbErr)
}
}
// Count outcomes
successCount := 0
failureCount := 0
cbErrorCount := 0
for i := range numRequests {
if results[i] {
successCount++
} else {
failureCount++
if cbErrors[i] {
cbErrorCount++
}
}
}
t.Logf("High load test: %d total requests", numRequests)
t.Logf("Results: %d successes, %d failures (%d circuit breaker errors)",
successCount, failureCount, cbErrorCount)
// Verify that circuit breaker activated (some requests failed due to open circuit)
assert.Greater(t, cbErrorCount, 0, "Circuit breaker should have opened and blocked some requests")
// Verify that not all requests failed (some succeeded before circuit opened)
assert.Greater(t, successCount, 0, "Some requests should have succeeded")
}
// TestCircuitBreaker_TrueConcurrentRequests tests actual concurrent execution
// with proper synchronization
func TestCircuitBreaker_TrueConcurrentRequests(t *testing.T) {
startTime := time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC)
vt := NewVirtualTimer(startTime)
var logMessages []string
cb := MakeCircuitBreaker[int](
vt.Now,
testCBClosedState(),
checkAllErrors,
testCBRetryPolicy(),
testMetrics(&logMessages),
)
stateRef := circuitbreaker.MakeClosedIORef(testCBClosedState())()
ctx := t.Context()
// Launch 20 concurrent requests
numRequests := 20
var wg sync.WaitGroup
results := make([]bool, numRequests)
cbErrors := make([]bool, numRequests)
// First, send some successful requests
for i := range 5 {
op := Of(i)
env := pair.MakePair(stateRef, op)
resultEnv := cb(env)
protectedOp := pair.Tail(resultEnv)
outcome := protectedOp(ctx)()
results[i] = result.IsRight(outcome)
}
// Now send concurrent failures to open the circuit
for i := 5; i < 10; i++ {
wg.Add(1)
go func(idx int) {
defer wg.Done()
op := Left[int](errors.New("concurrent failure"))
env := pair.MakePair(stateRef, op)
resultEnv := cb(env)
protectedOp := pair.Tail(resultEnv)
outcome := protectedOp(ctx)()
results[idx] = result.IsRight(outcome)
if !results[idx] {
_, err := result.Unwrap(outcome)
var cbErr *circuitbreaker.CircuitBreakerError
cbErrors[idx] = errors.As(err, &cbErr)
}
}(i)
}
wg.Wait()
// Now send more requests that should hit the open circuit
for i := 10; i < numRequests; i++ {
op := Of(i)
env := pair.MakePair(stateRef, op)
resultEnv := cb(env)
protectedOp := pair.Tail(resultEnv)
outcome := protectedOp(ctx)()
results[i] = result.IsRight(outcome)
if !results[i] {
_, err := result.Unwrap(outcome)
var cbErr *circuitbreaker.CircuitBreakerError
cbErrors[i] = errors.As(err, &cbErr)
}
}
// Count outcomes
successCount := 0
failureCount := 0
cbErrorCount := 0
for i := range numRequests {
if results[i] {
successCount++
} else {
failureCount++
if cbErrors[i] {
cbErrorCount++
}
}
}
t.Logf("Concurrent test: %d total requests", numRequests)
t.Logf("Results: %d successes, %d failures (%d circuit breaker errors)",
successCount, failureCount, cbErrorCount)
// Verify initial successes
assert.Equal(t, 5, successCount, "First 5 requests should succeed")
// Verify that circuit breaker opened and blocked some requests
assert.Greater(t, cbErrorCount, 0, "Circuit breaker should have opened and blocked some requests")
}

View File

@@ -0,0 +1,63 @@
package readerioresult
import "github.com/IBM/fp-go/v2/io"
// ChainConsumer chains a consumer function into a ReaderIOResult computation, discarding the original value.
// This is useful for performing side effects (like logging or metrics) that consume a value
// but don't produce a meaningful result. The computation continues with an empty struct.
//
// Type Parameters:
// - A: The type of value to consume
//
// Parameters:
// - c: A consumer function that performs side effects on the value
//
// Returns:
// - An Operator that chains the consumer and returns struct{}
//
// Example:
//
// logUser := func(u User) {
// log.Printf("Processing user: %s", u.Name)
// }
//
// pipeline := F.Pipe2(
// fetchUser(123),
// ChainConsumer(logUser),
// )
//
//go:inline
func ChainConsumer[A any](c Consumer[A]) Operator[A, struct{}] {
return ChainIOK(io.FromConsumer(c))
}
// ChainFirstConsumer chains a consumer function into a ReaderIOResult computation, preserving the original value.
// This is useful for performing side effects (like logging or metrics) while passing the value through unchanged.
//
// The consumer is executed for its side effects, but the original value is returned.
//
// Type Parameters:
// - A: The type of value to consume and return
//
// Parameters:
// - c: A consumer function that performs side effects on the value
//
// Returns:
// - An Operator that chains the consumer and returns the original value
//
// Example:
//
// logUser := func(u User) {
// log.Printf("User: %s", u.Name)
// }
//
// pipeline := F.Pipe3(
// fetchUser(123),
// ChainFirstConsumer(logUser), // Logs but passes user through
// Map(func(u User) string { return u.Email }),
// )
//
//go:inline
func ChainFirstConsumer[A any](c Consumer[A]) Operator[A, A] {
return ChainFirstIOK(io.FromConsumer(c))
}

View File

@@ -44,11 +44,11 @@ var (
)
// Close closes an object
func Close[C io.Closer](c C) RIOE.ReaderIOResult[any] {
func Close[C io.Closer](c C) RIOE.ReaderIOResult[struct{}] {
return F.Pipe2(
c,
IOEF.Close[C],
RIOE.FromIOEither[any],
RIOE.FromIOEither[struct{}],
)
}

View File

@@ -0,0 +1,51 @@
// Copyright (c) 2023 - 2025 IBM Corp.
// All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package readerioresult
import (
"context"
RIOR "github.com/IBM/fp-go/v2/readerioresult"
)
// FilterOrElse filters a ReaderIOResult value based on a predicate.
// This is a convenience wrapper around readerioresult.FilterOrElse that fixes
// the context type to context.Context.
//
// If the predicate returns true for the Right value, it passes through unchanged.
// If the predicate returns false, it transforms the Right value into a Left (error) using onFalse.
// Left values are passed through unchanged.
//
// Parameters:
// - pred: A predicate function that tests the Right value
// - onFalse: A function that converts the failing value into an error
//
// Returns:
// - An Operator that filters ReaderIOResult values based on the predicate
//
// Example:
//
// // Validate that a number is positive
// isPositive := N.MoreThan(0)
// onNegative := func(n int) error { return fmt.Errorf("%d is not positive", n) }
//
// filter := readerioresult.FilterOrElse(isPositive, onNegative)
// result := filter(readerioresult.Right(42))(context.Background())()
//
//go:inline
func FilterOrElse[A any](pred Predicate[A], onFalse func(A) error) Operator[A, A] {
return RIOR.FilterOrElse[context.Context](pred, onFalse)
}

View File

@@ -0,0 +1,295 @@
// Copyright (c) 2023 - 2025 IBM Corp.
// All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package readerioresult
import (
"context"
"github.com/IBM/fp-go/v2/reader"
RIO "github.com/IBM/fp-go/v2/readerio"
RIOR "github.com/IBM/fp-go/v2/readerioresult"
RR "github.com/IBM/fp-go/v2/readerresult"
)
// SequenceReader transforms a ReaderIOResult containing a Reader into a function that
// takes the Reader's environment first, then returns a ReaderIOResult.
//
// This function "flips" or "sequences" the nested structure, changing the order in which
// parameters are applied. It's particularly useful for point-free style programming where
// you want to partially apply the inner Reader's environment before dealing with the
// outer context.
//
// Type transformation:
//
// From: ReaderIOResult[Reader[R, A]]
// = func(context.Context) func() Either[error, func(R) A]
//
// To: func(context.Context) func(R) IOResult[A]
// = func(context.Context) func(R) func() Either[error, A]
//
// This allows you to:
// 1. Provide the context.Context first
// 2. Then provide the Reader's environment R
// 3. Finally execute the IO effect to get Either[error, A]
//
// Point-free style benefits:
// - Enables partial application of the Reader environment
// - Facilitates composition of Reader-based computations
// - Allows building reusable computation pipelines
// - Supports dependency injection patterns where R represents dependencies
//
// Example:
//
// type Config struct {
// Timeout int
// }
//
// // A computation that produces a Reader based on context
// func getMultiplier(ctx context.Context) func() Either[error, func(Config) int] {
// return func() Either[error, func(Config) int] {
// return Right[error](func(cfg Config) int {
// return cfg.Timeout * 2
// })
// }
// }
//
// // Sequence it to apply Config first
// sequenced := SequenceReader[Config, int](getMultiplier)
//
// // Now we can partially apply the Config
// cfg := Config{Timeout: 30}
// ctx := context.Background()
// result := sequenced(ctx)(cfg)() // Returns Right(60)
//
// This is especially useful in point-free style when building computation pipelines:
//
// var pipeline = F.Flow3(
// loadConfig, // ReaderIOResult[Reader[Database, Config]]
// SequenceReader, // func(context.Context) func(Database) IOResult[Config]
// applyToDatabase(db), // IOResult[Config]
// )
//
//go:inline
func SequenceReader[R, A any](ma ReaderIOResult[Reader[R, A]]) Kleisli[R, A] {
return RIOR.SequenceReader(ma)
}
// SequenceReaderIO transforms a ReaderIOResult containing a ReaderIO into a function that
// takes the ReaderIO's environment first, then returns a ReaderIOResult.
//
// This is similar to SequenceReader but works with ReaderIO, which represents a computation
// that depends on an environment R and performs IO effects.
//
// Type transformation:
//
// From: ReaderIOResult[ReaderIO[R, A]]
// = func(context.Context) func() Either[error, func(R) func() A]
//
// To: func(context.Context) func(R) IOResult[A]
// = func(context.Context) func(R) func() Either[error, A]
//
// The key difference from SequenceReader is that the inner computation (ReaderIO) already
// performs IO effects, so the sequencing combines these effects properly.
//
// Point-free style benefits:
// - Enables composition of ReaderIO-based computations
// - Allows partial application of environment before IO execution
// - Facilitates building effect pipelines with dependency injection
// - Supports layered architecture where R represents service dependencies
//
// Example:
//
// type Database struct {
// ConnectionString string
// }
//
// // A computation that produces a ReaderIO based on context
// func getQuery(ctx context.Context) func() Either[error, func(Database) func() string] {
// return func() Either[error, func(Database) func() string] {
// return Right[error](func(db Database) func() string {
// return func() string {
// // Perform actual IO here
// return "Query result from " + db.ConnectionString
// }
// })
// }
// }
//
// // Sequence it to apply Database first
// sequenced := SequenceReaderIO[Database, string](getQuery)
//
// // Partially apply the Database
// db := Database{ConnectionString: "localhost:5432"}
// ctx := context.Background()
// result := sequenced(ctx)(db)() // Executes IO and returns Right("Query result...")
//
// In point-free style, this enables clean composition:
//
// var executeQuery = F.Flow3(
// prepareQuery, // ReaderIOResult[ReaderIO[Database, QueryResult]]
// SequenceReaderIO, // func(context.Context) func(Database) IOResult[QueryResult]
// withDatabase(db), // IOResult[QueryResult]
// )
//
//go:inline
func SequenceReaderIO[R, A any](ma ReaderIOResult[RIO.ReaderIO[R, A]]) Kleisli[R, A] {
return RIOR.SequenceReaderIO(ma)
}
// SequenceReaderResult transforms a ReaderIOResult containing a ReaderResult into a function
// that takes the ReaderResult's environment first, then returns a ReaderIOResult.
//
// This is similar to SequenceReader but works with ReaderResult, which represents a computation
// that depends on an environment R and can fail with an error.
//
// Type transformation:
//
// From: ReaderIOResult[ReaderResult[R, A]]
// = func(context.Context) func() Either[error, func(R) Either[error, A]]
//
// To: func(context.Context) func(R) IOResult[A]
// = func(context.Context) func(R) func() Either[error, A]
//
// The sequencing properly combines the error handling from both the outer ReaderIOResult
// and the inner ReaderResult, ensuring that errors from either level are propagated correctly.
//
// Point-free style benefits:
// - Enables composition of error-handling computations with dependency injection
// - Allows partial application of dependencies before error handling
// - Facilitates building validation pipelines with environment dependencies
// - Supports service-oriented architectures with proper error propagation
//
// Example:
//
// type Config struct {
// MaxRetries int
// }
//
// // A computation that produces a ReaderResult based on context
// func validateRetries(ctx context.Context) func() Either[error, func(Config) Either[error, int]] {
// return func() Either[error, func(Config) Either[error, int]] {
// return Right[error](func(cfg Config) Either[error, int] {
// if cfg.MaxRetries < 0 {
// return Left[int](errors.New("negative retries"))
// }
// return Right[error](cfg.MaxRetries)
// })
// }
// }
//
// // Sequence it to apply Config first
// sequenced := SequenceReaderResult[Config, int](validateRetries)
//
// // Partially apply the Config
// cfg := Config{MaxRetries: 3}
// ctx := context.Background()
// result := sequenced(ctx)(cfg)() // Returns Right(3)
//
// // With invalid config
// badCfg := Config{MaxRetries: -1}
// badResult := sequenced(ctx)(badCfg)() // Returns Left(error("negative retries"))
//
// In point-free style, this enables validation pipelines:
//
// var validateAndProcess = F.Flow4(
// loadConfig, // ReaderIOResult[ReaderResult[Config, Settings]]
// SequenceReaderResult, // func(context.Context) func(Config) IOResult[Settings]
// applyConfig(cfg), // IOResult[Settings]
// Chain(processSettings), // IOResult[Result]
// )
//
//go:inline
func SequenceReaderResult[R, A any](ma ReaderIOResult[RR.ReaderResult[R, A]]) Kleisli[R, A] {
return RIOR.SequenceReaderEither(ma)
}
// TraverseReader transforms a ReaderIOResult computation by applying a Reader-based function,
// effectively introducing a new environment dependency.
//
// This function takes a Reader-based transformation (Kleisli arrow) and returns a function that
// can transform a ReaderIOResult. The result allows you to provide the Reader's environment (R)
// first, which then produces a ReaderIOResult that depends on the context.
//
// Type transformation:
//
// From: ReaderIOResult[A]
// = func(context.Context) func() Either[error, A]
//
// With: reader.Kleisli[R, A, B]
// = func(A) func(R) B
//
// To: func(ReaderIOResult[A]) func(R) ReaderIOResult[B]
// = func(ReaderIOResult[A]) func(R) func(context.Context) func() Either[error, B]
//
// This enables:
// 1. Transforming values within a ReaderIOResult using environment-dependent logic
// 2. Introducing new environment dependencies into existing computations
// 3. Building composable pipelines where transformations depend on configuration or dependencies
// 4. Point-free style composition with Reader-based transformations
//
// Type Parameters:
// - R: The environment type that the Reader depends on
// - A: The input value type
// - B: The output value type
//
// Parameters:
// - f: A Reader-based Kleisli arrow that transforms A to B using environment R
//
// Returns:
// - A function that takes a ReaderIOResult[A] and returns a Kleisli[R, B],
// which is func(R) ReaderIOResult[B]
//
// The function preserves error handling and IO effects while adding the Reader environment dependency.
//
// Example:
//
// type Config struct {
// Multiplier int
// }
//
// // A Reader-based transformation that depends on Config
// multiply := func(x int) func(Config) int {
// return func(cfg Config) int {
// return x * cfg.Multiplier
// }
// }
//
// // Original computation that produces an int
// computation := Right[int](10)
//
// // Apply TraverseReader to introduce Config dependency
// traversed := TraverseReader[Config, int, int](multiply)
// result := traversed(computation)
//
// // Now we can provide the Config to get the final result
// cfg := Config{Multiplier: 5}
// ctx := context.Background()
// finalResult := result(cfg)(ctx)() // Returns Right(50)
//
// In point-free style, this enables clean composition:
//
// var pipeline = F.Flow3(
// loadValue, // ReaderIOResult[int]
// TraverseReader(multiplyByConfig), // func(Config) ReaderIOResult[int]
// applyConfig(cfg), // ReaderIOResult[int]
// )
//
//go:inline
func TraverseReader[R, A, B any](
f reader.Kleisli[R, A, B],
) func(ReaderIOResult[A]) Kleisli[R, B] {
return RIOR.TraverseReader[context.Context](f)
}

View File

@@ -0,0 +1,333 @@
// Copyright (c) 2023 - 2025 IBM Corp.
// All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package readerioresult_test
import (
"context"
"fmt"
RIOE "github.com/IBM/fp-go/v2/context/readerioresult"
"github.com/IBM/fp-go/v2/either"
F "github.com/IBM/fp-go/v2/function"
)
// Example_sequenceReader_basicUsage demonstrates the basic usage of SequenceReader
// to flip the parameter order, enabling point-free style programming.
func Example_sequenceReader_basicUsage() {
type Config struct {
Multiplier int
}
// A computation that produces a Reader based on context
getComputation := func(ctx context.Context) func() either.Either[error, func(Config) int] {
return func() either.Either[error, func(Config) int] {
// This could check context for cancellation, deadlines, etc.
return either.Right[error](func(cfg Config) int {
return cfg.Multiplier * 10
})
}
}
// Sequence it to flip the parameter order
// Now Config comes first, then context
sequenced := RIOE.SequenceReader(getComputation)
// Partially apply the Config - this is the key benefit for point-free style
cfg := Config{Multiplier: 5}
withConfig := sequenced(cfg)
// Now we have a ReaderIOResult[int] that can be used with any context
ctx := context.Background()
result := withConfig(ctx)()
if value, err := either.Unwrap(result); err == nil {
fmt.Println(value)
}
// Output: 50
}
// Example_sequenceReader_dependencyInjection demonstrates how SequenceReader
// enables clean dependency injection patterns in point-free style.
func Example_sequenceReader_dependencyInjection() {
// Define our dependencies
type Database struct {
ConnectionString string
}
type UserService struct {
db Database
}
// A function that creates a computation requiring a Database
makeQuery := func(ctx context.Context) func() either.Either[error, func(Database) string] {
return func() either.Either[error, func(Database) string] {
return either.Right[error](func(db Database) string {
return fmt.Sprintf("Querying %s", db.ConnectionString)
})
}
}
// Sequence to enable dependency injection
queryWithDB := RIOE.SequenceReader(makeQuery)
// Inject the database dependency
db := Database{ConnectionString: "localhost:5432"}
query := queryWithDB(db)
// Execute with context
ctx := context.Background()
result := query(ctx)()
if value, err := either.Unwrap(result); err == nil {
fmt.Println(value)
}
// Output: Querying localhost:5432
}
// Example_sequenceReader_pointFreeComposition demonstrates how SequenceReader
// enables point-free style composition of computations.
func Example_sequenceReader_pointFreeComposition() {
type Config struct {
BaseValue int
}
// Step 1: Create a computation that produces a Reader
step1 := func(ctx context.Context) func() either.Either[error, func(Config) int] {
return func() either.Either[error, func(Config) int] {
return either.Right[error](func(cfg Config) int {
return cfg.BaseValue * 2
})
}
}
// Step 2: Sequence it to enable partial application
sequenced := RIOE.SequenceReader(step1)
// Step 3: Build a pipeline using point-free style
// Partially apply the config
cfg := Config{BaseValue: 10}
// Create a reusable computation with the config baked in
computation := F.Pipe1(
sequenced(cfg),
RIOE.Map(func(x int) int { return x + 5 }),
)
// Execute the pipeline
ctx := context.Background()
result := computation(ctx)()
if value, err := either.Unwrap(result); err == nil {
fmt.Println(value)
}
// Output: 25
}
// Example_sequenceReader_multipleEnvironments demonstrates using SequenceReader
// to work with multiple environment types in a clean, composable way.
func Example_sequenceReader_multipleEnvironments() {
type DatabaseConfig struct {
Host string
Port int
}
type APIConfig struct {
Endpoint string
APIKey string
}
// Function that needs DatabaseConfig
getDatabaseURL := func(ctx context.Context) func() either.Either[error, func(DatabaseConfig) string] {
return func() either.Either[error, func(DatabaseConfig) string] {
return either.Right[error](func(cfg DatabaseConfig) string {
return fmt.Sprintf("%s:%d", cfg.Host, cfg.Port)
})
}
}
// Function that needs APIConfig
getAPIURL := func(ctx context.Context) func() either.Either[error, func(APIConfig) string] {
return func() either.Either[error, func(APIConfig) string] {
return either.Right[error](func(cfg APIConfig) string {
return cfg.Endpoint
})
}
}
// Sequence both to enable partial application
withDBConfig := RIOE.SequenceReader(getDatabaseURL)
withAPIConfig := RIOE.SequenceReader(getAPIURL)
// Partially apply different configs
dbCfg := DatabaseConfig{Host: "localhost", Port: 5432}
apiCfg := APIConfig{Endpoint: "https://api.example.com", APIKey: "secret"}
dbQuery := withDBConfig(dbCfg)
apiQuery := withAPIConfig(apiCfg)
// Execute both with the same context
ctx := context.Background()
dbResult := dbQuery(ctx)()
apiResult := apiQuery(ctx)()
if dbURL, err := either.Unwrap(dbResult); err == nil {
fmt.Println("Database:", dbURL)
}
if apiURL, err := either.Unwrap(apiResult); err == nil {
fmt.Println("API:", apiURL)
}
// Output:
// Database: localhost:5432
// API: https://api.example.com
}
// Example_sequenceReaderResult_errorHandling demonstrates how SequenceReaderResult
// enables point-free style with proper error handling at multiple levels.
func Example_sequenceReaderResult_errorHandling() {
type ValidationConfig struct {
MinValue int
MaxValue int
}
// A computation that can fail at both outer and inner levels
makeValidator := func(ctx context.Context) func() either.Either[error, func(context.Context) either.Either[error, int]] {
return func() either.Either[error, func(context.Context) either.Either[error, int]] {
// Outer level: check context
if ctx.Err() != nil {
return either.Left[func(context.Context) either.Either[error, int]](ctx.Err())
}
// Return inner computation
return either.Right[error](func(innerCtx context.Context) either.Either[error, int] {
// Inner level: perform validation
value := 42
if value < 0 {
return either.Left[int](fmt.Errorf("value too small: %d", value))
}
if value > 100 {
return either.Left[int](fmt.Errorf("value too large: %d", value))
}
return either.Right[error](value)
})
}
}
// Sequence to enable point-free composition
sequenced := RIOE.SequenceReaderResult(makeValidator)
// Build a pipeline with error handling
ctx := context.Background()
pipeline := F.Pipe2(
sequenced(ctx),
RIOE.Map(func(x int) int { return x * 2 }),
RIOE.Chain(func(x int) RIOE.ReaderIOResult[string] {
return RIOE.Of(fmt.Sprintf("Result: %d", x))
}),
)
result := pipeline(ctx)()
if value, err := either.Unwrap(result); err == nil {
fmt.Println(value)
}
// Output: Result: 84
}
// Example_sequenceReader_partialApplication demonstrates the power of partial
// application enabled by SequenceReader for building reusable computations.
func Example_sequenceReader_partialApplication() {
type ServiceConfig struct {
ServiceName string
Version string
}
// Create a computation factory
makeServiceInfo := func(ctx context.Context) func() either.Either[error, func(ServiceConfig) string] {
return func() either.Either[error, func(ServiceConfig) string] {
return either.Right[error](func(cfg ServiceConfig) string {
return fmt.Sprintf("%s v%s", cfg.ServiceName, cfg.Version)
})
}
}
// Sequence it
sequenced := RIOE.SequenceReader(makeServiceInfo)
// Create multiple service configurations
authConfig := ServiceConfig{ServiceName: "AuthService", Version: "1.0.0"}
userConfig := ServiceConfig{ServiceName: "UserService", Version: "2.1.0"}
// Partially apply each config to create specialized computations
getAuthInfo := sequenced(authConfig)
getUserInfo := sequenced(userConfig)
// These can now be reused across different contexts
ctx := context.Background()
authResult := getAuthInfo(ctx)()
userResult := getUserInfo(ctx)()
if auth, err := either.Unwrap(authResult); err == nil {
fmt.Println(auth)
}
if user, err := either.Unwrap(userResult); err == nil {
fmt.Println(user)
}
// Output:
// AuthService v1.0.0
// UserService v2.1.0
}
// Example_sequenceReader_testingBenefits demonstrates how SequenceReader
// makes testing easier by allowing you to inject test dependencies.
func Example_sequenceReader_testingBenefits() {
// Simple logger that collects messages
type SimpleLogger struct {
Messages []string
}
// A computation that depends on a logger (using the struct directly)
makeLoggingOperation := func(ctx context.Context) func() either.Either[error, func(*SimpleLogger) string] {
return func() either.Either[error, func(*SimpleLogger) string] {
return either.Right[error](func(logger *SimpleLogger) string {
logger.Messages = append(logger.Messages, "Operation started")
result := "Success"
logger.Messages = append(logger.Messages, fmt.Sprintf("Operation completed: %s", result))
return result
})
}
}
// Sequence to enable dependency injection
sequenced := RIOE.SequenceReader(makeLoggingOperation)
// Inject a test logger
testLogger := &SimpleLogger{Messages: []string{}}
operation := sequenced(testLogger)
// Execute
ctx := context.Background()
result := operation(ctx)()
if value, err := either.Unwrap(result); err == nil {
fmt.Println("Result:", value)
fmt.Println("Logs:", len(testLogger.Messages))
}
// Output:
// Result: Success
// Logs: 2
}

View File

@@ -0,0 +1,866 @@
// Copyright (c) 2023 - 2025 IBM Corp.
// All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package readerioresult
import (
"context"
"errors"
"fmt"
"testing"
"github.com/IBM/fp-go/v2/either"
"github.com/stretchr/testify/assert"
)
func TestSequenceReader(t *testing.T) {
t.Run("flips parameter order for simple types", func(t *testing.T) {
// Original: ReaderIOResult[Reader[string, int]]
// = func(context.Context) func() Either[error, func(string) int]
original := func(ctx context.Context) func() Either[Reader[string, int]] {
return func() Either[Reader[string, int]] {
return either.Right[error](func(s string) int {
return 10 + len(s)
})
}
}
// Sequenced: func(string) func(context.Context) IOResult[int]
// The Reader environment (string) is now the first parameter
sequenced := SequenceReader(original)
ctx := context.Background()
// Test original
result1 := original(ctx)()
assert.True(t, either.IsRight(result1))
innerFunc1, _ := either.Unwrap(result1)
value1 := innerFunc1("hello")
assert.Equal(t, 15, value1)
// Test sequenced - note the flipped order: string first, then context
result2 := sequenced("hello")(ctx)()
assert.True(t, either.IsRight(result2))
value2, _ := either.Unwrap(result2)
assert.Equal(t, 15, value2)
})
t.Run("flips parameter order for struct types", func(t *testing.T) {
type Database struct {
ConnectionString string
}
// Original: ReaderIOResult[Reader[Database, string]]
query := func(ctx context.Context) func() Either[Reader[Database, string]] {
return func() Either[Reader[Database, string]] {
if ctx.Err() != nil {
return either.Left[Reader[Database, string]](ctx.Err())
}
return either.Right[error](func(db Database) string {
return fmt.Sprintf("Query on %s", db.ConnectionString)
})
}
}
db := Database{ConnectionString: "localhost:5432"}
ctx := context.Background()
expected := "Query on localhost:5432"
// Sequence it
sequenced := SequenceReader(query)
// Test original with valid inputs
result1 := query(ctx)()
assert.True(t, either.IsRight(result1))
innerFunc1, _ := either.Unwrap(result1)
value1 := innerFunc1(db)
assert.Equal(t, expected, value1)
// Test sequenced with valid inputs - Database first, then context
result2 := sequenced(db)(ctx)()
assert.True(t, either.IsRight(result2))
value2, _ := either.Unwrap(result2)
assert.Equal(t, expected, value2)
})
t.Run("preserves outer error", func(t *testing.T) {
expectedError := errors.New("outer error")
// Original that fails at outer level
original := func(ctx context.Context) func() Either[Reader[string, int]] {
return func() Either[Reader[string, int]] {
return either.Left[Reader[string, int]](expectedError)
}
}
ctx := context.Background()
// Test original with error
result1 := original(ctx)()
assert.True(t, either.IsLeft(result1))
_, err1 := either.Unwrap(result1)
assert.Equal(t, expectedError, err1)
// Test sequenced - the outer error is preserved
sequenced := SequenceReader(original)
result2 := sequenced("test")(ctx)()
assert.True(t, either.IsLeft(result2))
_, err2 := either.Unwrap(result2)
assert.Equal(t, expectedError, err2)
})
t.Run("preserves computation logic", func(t *testing.T) {
// Original function
original := func(ctx context.Context) func() Either[Reader[string, int]] {
return func() Either[Reader[string, int]] {
return either.Right[error](func(s string) int {
return 3 * len(s)
})
}
}
ctx := context.Background()
// Sequence
sequenced := SequenceReader(original)
// Test that sequence produces correct results
result1 := original(ctx)()
innerFunc1, _ := either.Unwrap(result1)
value1 := innerFunc1("test")
result2 := sequenced("test")(ctx)()
value2, _ := either.Unwrap(result2)
assert.Equal(t, value1, value2)
assert.Equal(t, 12, value2) // 3 * 4
})
t.Run("works with zero values", func(t *testing.T) {
original := func(ctx context.Context) func() Either[Reader[string, int]] {
return func() Either[Reader[string, int]] {
return either.Right[error](func(s string) int {
return len(s)
})
}
}
ctx := context.Background()
sequenced := SequenceReader(original)
// Test with zero values
result1 := original(ctx)()
innerFunc1, _ := either.Unwrap(result1)
value1 := innerFunc1("")
assert.Equal(t, 0, value1)
result2 := sequenced("")(ctx)()
value2, _ := either.Unwrap(result2)
assert.Equal(t, 0, value2)
})
t.Run("respects context cancellation", func(t *testing.T) {
original := func(ctx context.Context) func() Either[Reader[string, int]] {
return func() Either[Reader[string, int]] {
if ctx.Err() != nil {
return either.Left[Reader[string, int]](ctx.Err())
}
return either.Right[error](func(s string) int {
return len(s)
})
}
}
ctx, cancel := context.WithCancel(context.Background())
cancel()
sequenced := SequenceReader(original)
result := sequenced("test")(ctx)()
assert.True(t, either.IsLeft(result))
_, err := either.Unwrap(result)
assert.Equal(t, context.Canceled, err)
})
t.Run("enables point-free style with partial application", func(t *testing.T) {
type Config struct {
Multiplier int
}
// Original computation
original := func(ctx context.Context) func() Either[Reader[Config, int]] {
return func() Either[Reader[Config, int]] {
return either.Right[error](func(cfg Config) int {
return cfg.Multiplier * 10
})
}
}
// Sequence to enable partial application
sequenced := SequenceReader(original)
// Partially apply the Config
cfg := Config{Multiplier: 5}
withConfig := sequenced(cfg)
// Now we have a ReaderIOResult[int] that can be used in different contexts
ctx1 := context.Background()
result1 := withConfig(ctx1)()
assert.True(t, either.IsRight(result1))
value1, _ := either.Unwrap(result1)
assert.Equal(t, 50, value1)
// Can reuse with different context
ctx2 := context.Background()
result2 := withConfig(ctx2)()
assert.True(t, either.IsRight(result2))
value2, _ := either.Unwrap(result2)
assert.Equal(t, 50, value2)
})
}
func TestSequenceReaderIO(t *testing.T) {
t.Run("flips parameter order for simple types", func(t *testing.T) {
// Original: ReaderIOResult[ReaderIO[int]]
// = func(context.Context) func() Either[error, func(context.Context) func() int]
original := func(ctx context.Context) func() Either[ReaderIO[int]] {
return func() Either[ReaderIO[int]] {
return either.Right[error](func(innerCtx context.Context) func() int {
return func() int {
return 20
}
})
}
}
ctx := context.Background()
sequenced := SequenceReaderIO(original)
// Test original
result1 := original(ctx)()
assert.True(t, either.IsRight(result1))
innerFunc1, _ := either.Unwrap(result1)
value1 := innerFunc1(ctx)()
assert.Equal(t, 20, value1)
// Test sequenced - context first, then context again for inner ReaderIO
result2 := sequenced(ctx)(ctx)()
assert.True(t, either.IsRight(result2))
value2, _ := either.Unwrap(result2)
assert.Equal(t, 20, value2)
})
t.Run("preserves outer error", func(t *testing.T) {
expectedError := errors.New("outer error")
// Original that fails at outer level
original := func(ctx context.Context) func() Either[ReaderIO[int]] {
return func() Either[ReaderIO[int]] {
return either.Left[ReaderIO[int]](expectedError)
}
}
ctx := context.Background()
// Test original with error
result1 := original(ctx)()
assert.True(t, either.IsLeft(result1))
_, err1 := either.Unwrap(result1)
assert.Equal(t, expectedError, err1)
// Test sequenced - the outer error is preserved
sequenced := SequenceReaderIO(original)
result2 := sequenced(ctx)(ctx)()
assert.True(t, either.IsLeft(result2))
_, err2 := either.Unwrap(result2)
assert.Equal(t, expectedError, err2)
})
t.Run("respects context cancellation in outer context", func(t *testing.T) {
original := func(ctx context.Context) func() Either[ReaderIO[int]] {
return func() Either[ReaderIO[int]] {
if ctx.Err() != nil {
return either.Left[ReaderIO[int]](ctx.Err())
}
return either.Right[error](func(innerCtx context.Context) func() int {
return func() int {
return 20
}
})
}
}
ctx, cancel := context.WithCancel(context.Background())
cancel()
sequenced := SequenceReaderIO(original)
result := sequenced(ctx)(ctx)()
assert.True(t, either.IsLeft(result))
_, err := either.Unwrap(result)
assert.Equal(t, context.Canceled, err)
})
}
func TestSequenceReaderResult(t *testing.T) {
t.Run("flips parameter order for simple types", func(t *testing.T) {
// Original: ReaderIOResult[ReaderResult[int]]
// = func(context.Context) func() Either[error, func(context.Context) Either[error, int]]
original := func(ctx context.Context) func() Either[ReaderResult[int]] {
return func() Either[ReaderResult[int]] {
return either.Right[error](func(innerCtx context.Context) Either[int] {
return either.Right[error](20)
})
}
}
ctx := context.Background()
sequenced := SequenceReaderResult(original)
// Test original
result1 := original(ctx)()
assert.True(t, either.IsRight(result1))
innerFunc1, _ := either.Unwrap(result1)
innerResult1 := innerFunc1(ctx)
assert.True(t, either.IsRight(innerResult1))
value1, _ := either.Unwrap(innerResult1)
assert.Equal(t, 20, value1)
// Test sequenced
result2 := sequenced(ctx)(ctx)()
assert.True(t, either.IsRight(result2))
value2, _ := either.Unwrap(result2)
assert.Equal(t, 20, value2)
})
t.Run("preserves outer error", func(t *testing.T) {
expectedError := errors.New("outer error")
// Original that fails at outer level
original := func(ctx context.Context) func() Either[ReaderResult[int]] {
return func() Either[ReaderResult[int]] {
return either.Left[ReaderResult[int]](expectedError)
}
}
ctx := context.Background()
// Test original with error
result1 := original(ctx)()
assert.True(t, either.IsLeft(result1))
_, err1 := either.Unwrap(result1)
assert.Equal(t, expectedError, err1)
// Test sequenced - the outer error is preserved
sequenced := SequenceReaderResult(original)
result2 := sequenced(ctx)(ctx)()
assert.True(t, either.IsLeft(result2))
_, err2 := either.Unwrap(result2)
assert.Equal(t, expectedError, err2)
})
t.Run("preserves inner error", func(t *testing.T) {
expectedError := errors.New("inner error")
// Original that fails at inner level
original := func(ctx context.Context) func() Either[ReaderResult[int]] {
return func() Either[ReaderResult[int]] {
return either.Right[error](func(innerCtx context.Context) Either[int] {
return either.Left[int](expectedError)
})
}
}
ctx := context.Background()
// Test original with inner error
result1 := original(ctx)()
assert.True(t, either.IsRight(result1))
innerFunc1, _ := either.Unwrap(result1)
innerResult1 := innerFunc1(ctx)
assert.True(t, either.IsLeft(innerResult1))
_, innerErr1 := either.Unwrap(innerResult1)
assert.Equal(t, expectedError, innerErr1)
// Test sequenced with inner error
sequenced := SequenceReaderResult(original)
result2 := sequenced(ctx)(ctx)()
assert.True(t, either.IsLeft(result2))
_, innerErr2 := either.Unwrap(result2)
assert.Equal(t, expectedError, innerErr2)
})
t.Run("handles errors at different levels", func(t *testing.T) {
// Original that can fail at both levels
makeOriginal := func(x int) ReaderIOResult[ReaderResult[int]] {
return func(ctx context.Context) func() Either[ReaderResult[int]] {
return func() Either[ReaderResult[int]] {
if x < -10 {
return either.Left[ReaderResult[int]](errors.New("outer: too negative"))
}
return either.Right[error](func(innerCtx context.Context) Either[int] {
if x < 0 {
return either.Left[int](errors.New("inner: negative value"))
}
return either.Right[error](x * 2)
})
}
}
}
ctx := context.Background()
// Test outer error
sequenced1 := SequenceReaderResult(makeOriginal(-20))
result1 := sequenced1(ctx)(ctx)()
assert.True(t, either.IsLeft(result1))
_, err1 := either.Unwrap(result1)
assert.Contains(t, err1.Error(), "outer")
// Test inner error
sequenced2 := SequenceReaderResult(makeOriginal(-5))
result2 := sequenced2(ctx)(ctx)()
assert.True(t, either.IsLeft(result2))
_, err2 := either.Unwrap(result2)
assert.Contains(t, err2.Error(), "inner")
// Test success
sequenced3 := SequenceReaderResult(makeOriginal(10))
result3 := sequenced3(ctx)(ctx)()
assert.True(t, either.IsRight(result3))
value3, _ := either.Unwrap(result3)
assert.Equal(t, 20, value3)
})
t.Run("respects context cancellation", func(t *testing.T) {
original := func(ctx context.Context) func() Either[ReaderResult[int]] {
return func() Either[ReaderResult[int]] {
if ctx.Err() != nil {
return either.Left[ReaderResult[int]](ctx.Err())
}
return either.Right[error](func(innerCtx context.Context) Either[int] {
if innerCtx.Err() != nil {
return either.Left[int](innerCtx.Err())
}
return either.Right[error](20)
})
}
}
ctx, cancel := context.WithCancel(context.Background())
cancel()
sequenced := SequenceReaderResult(original)
result := sequenced(ctx)(ctx)()
assert.True(t, either.IsLeft(result))
_, err := either.Unwrap(result)
assert.Equal(t, context.Canceled, err)
})
}
func TestSequenceEdgeCases(t *testing.T) {
t.Run("works with empty struct", func(t *testing.T) {
type Empty struct{}
original := func(ctx context.Context) func() Either[Reader[Empty, int]] {
return func() Either[Reader[Empty, int]] {
return either.Right[error](func(e Empty) int {
return 20
})
}
}
ctx := context.Background()
empty := Empty{}
sequenced := SequenceReader(original)
result1 := original(ctx)()
innerFunc1, _ := either.Unwrap(result1)
value1 := innerFunc1(empty)
assert.Equal(t, 20, value1)
result2 := sequenced(empty)(ctx)()
value2, _ := either.Unwrap(result2)
assert.Equal(t, 20, value2)
})
t.Run("works with pointer types", func(t *testing.T) {
type Data struct {
Value int
}
original := func(ctx context.Context) func() Either[Reader[*Data, int]] {
return func() Either[Reader[*Data, int]] {
return either.Right[error](func(d *Data) int {
if d == nil {
return 42
}
return 42 + d.Value
})
}
}
ctx := context.Background()
data := &Data{Value: 100}
sequenced := SequenceReader(original)
// Test with non-nil pointer
result1 := original(ctx)()
innerFunc1, _ := either.Unwrap(result1)
value1 := innerFunc1(data)
assert.Equal(t, 142, value1)
result2 := sequenced(data)(ctx)()
value2, _ := either.Unwrap(result2)
assert.Equal(t, 142, value2)
// Test with nil pointer
result3 := sequenced(nil)(ctx)()
value3, _ := either.Unwrap(result3)
assert.Equal(t, 42, value3)
})
t.Run("maintains referential transparency", func(t *testing.T) {
// The same inputs should always produce the same outputs
original := func(ctx context.Context) func() Either[Reader[string, int]] {
return func() Either[Reader[string, int]] {
return either.Right[error](func(s string) int {
return 10 + len(s)
})
}
}
ctx := context.Background()
sequenced := SequenceReader(original)
// Call multiple times with same inputs
for range 5 {
result1 := original(ctx)()
innerFunc1, _ := either.Unwrap(result1)
value1 := innerFunc1("hello")
assert.Equal(t, 15, value1)
result2 := sequenced("hello")(ctx)()
value2, _ := either.Unwrap(result2)
assert.Equal(t, 15, value2)
}
})
}
func TestTraverseReader(t *testing.T) {
t.Run("basic transformation with Reader dependency", func(t *testing.T) {
type Config struct {
Multiplier int
}
// Original computation
original := Right(10)
// Reader-based transformation
multiply := func(x int) Reader[Config, int] {
return func(cfg Config) int {
return x * cfg.Multiplier
}
}
// Apply TraverseReader
traversed := TraverseReader(multiply)
result := traversed(original)
// Provide Config and execute
cfg := Config{Multiplier: 5}
ctx := context.Background()
finalResult := result(cfg)(ctx)()
assert.True(t, either.IsRight(finalResult))
value, _ := either.Unwrap(finalResult)
assert.Equal(t, 50, value)
})
t.Run("preserves outer error", func(t *testing.T) {
type Config struct {
Multiplier int
}
expectedError := errors.New("computation failed")
// Original computation that fails
original := Left[int](expectedError)
// Reader-based transformation (won't be called)
multiply := func(x int) Reader[Config, int] {
return func(cfg Config) int {
return x * cfg.Multiplier
}
}
// Apply TraverseReader
traversed := TraverseReader(multiply)
result := traversed(original)
// Provide Config and execute
cfg := Config{Multiplier: 5}
ctx := context.Background()
finalResult := result(cfg)(ctx)()
assert.True(t, either.IsLeft(finalResult))
_, err := either.Unwrap(finalResult)
assert.Equal(t, expectedError, err)
})
t.Run("works with different types", func(t *testing.T) {
type Database struct {
Prefix string
}
// Original computation producing an int
original := Right(42)
// Reader-based transformation: int -> string using Database
format := func(x int) func(Database) string {
return func(db Database) string {
return fmt.Sprintf("%s:%d", db.Prefix, x)
}
}
// Apply TraverseReader
traversed := TraverseReader(format)
result := traversed(original)
// Provide Database and execute
db := Database{Prefix: "ID"}
ctx := context.Background()
finalResult := result(db)(ctx)()
assert.True(t, either.IsRight(finalResult))
value, _ := either.Unwrap(finalResult)
assert.Equal(t, "ID:42", value)
})
t.Run("works with struct environments", func(t *testing.T) {
type Settings struct {
Prefix string
Suffix string
}
// Original computation
original := Right("value")
// Reader-based transformation using Settings
decorate := func(s string) func(Settings) string {
return func(settings Settings) string {
return settings.Prefix + s + settings.Suffix
}
}
// Apply TraverseReader
traversed := TraverseReader(decorate)
result := traversed(original)
// Provide Settings and execute
settings := Settings{Prefix: "[", Suffix: "]"}
ctx := context.Background()
finalResult := result(settings)(ctx)()
assert.True(t, either.IsRight(finalResult))
value, _ := either.Unwrap(finalResult)
assert.Equal(t, "[value]", value)
})
t.Run("enables partial application", func(t *testing.T) {
type Config struct {
Factor int
}
// Original computation
original := Right(10)
// Reader-based transformation
scale := func(x int) Reader[Config, int] {
return func(cfg Config) int {
return x * cfg.Factor
}
}
// Apply TraverseReader
traversed := TraverseReader(scale)
result := traversed(original)
// Partially apply Config
cfg := Config{Factor: 3}
withConfig := result(cfg)
// Can now use with different contexts
ctx1 := context.Background()
finalResult1 := withConfig(ctx1)()
assert.True(t, either.IsRight(finalResult1))
value1, _ := either.Unwrap(finalResult1)
assert.Equal(t, 30, value1)
// Reuse with different context
ctx2 := context.Background()
finalResult2 := withConfig(ctx2)()
assert.True(t, either.IsRight(finalResult2))
value2, _ := either.Unwrap(finalResult2)
assert.Equal(t, 30, value2)
})
t.Run("respects context cancellation", func(t *testing.T) {
type Config struct {
Value int
}
// Original computation that checks context
original := func(ctx context.Context) func() Either[int] {
return func() Either[int] {
if ctx.Err() != nil {
return either.Left[int](ctx.Err())
}
return either.Right[error](10)
}
}
// Reader-based transformation
multiply := func(x int) Reader[Config, int] {
return func(cfg Config) int {
return x * cfg.Value
}
}
// Apply TraverseReader
traversed := TraverseReader(multiply)
result := traversed(original)
// Use canceled context
ctx, cancel := context.WithCancel(context.Background())
cancel()
cfg := Config{Value: 5}
finalResult := result(cfg)(ctx)()
assert.True(t, either.IsLeft(finalResult))
_, err := either.Unwrap(finalResult)
assert.Equal(t, context.Canceled, err)
})
t.Run("works with zero values", func(t *testing.T) {
type Config struct {
Offset int
}
// Original computation with zero value
original := Right(0)
// Reader-based transformation
add := func(x int) Reader[Config, int] {
return func(cfg Config) int {
return x + cfg.Offset
}
}
// Apply TraverseReader
traversed := TraverseReader(add)
result := traversed(original)
// Provide Config with zero offset
cfg := Config{Offset: 0}
ctx := context.Background()
finalResult := result(cfg)(ctx)()
assert.True(t, either.IsRight(finalResult))
value, _ := either.Unwrap(finalResult)
assert.Equal(t, 0, value)
})
t.Run("chains multiple transformations", func(t *testing.T) {
type Config struct {
Multiplier int
}
// Original computation
original := Right(5)
// First Reader-based transformation
multiply := func(x int) Reader[Config, int] {
return func(cfg Config) int {
return x * cfg.Multiplier
}
}
// Apply TraverseReader
traversed := TraverseReader(multiply)
result := traversed(original)
// Provide Config and execute
cfg := Config{Multiplier: 4}
ctx := context.Background()
finalResult := result(cfg)(ctx)()
assert.True(t, either.IsRight(finalResult))
value, _ := either.Unwrap(finalResult)
assert.Equal(t, 20, value) // 5 * 4 = 20
})
t.Run("works with complex Reader logic", func(t *testing.T) {
type ValidationRules struct {
MinValue int
MaxValue int
}
// Original computation
original := Right(50)
// Reader-based transformation with validation logic
validate := func(x int) func(ValidationRules) int {
return func(rules ValidationRules) int {
if x < rules.MinValue {
return rules.MinValue
}
if x > rules.MaxValue {
return rules.MaxValue
}
return x
}
}
// Apply TraverseReader
traversed := TraverseReader(validate)
result := traversed(original)
// Test with value within range
rules1 := ValidationRules{MinValue: 0, MaxValue: 100}
ctx := context.Background()
finalResult1 := result(rules1)(ctx)()
assert.True(t, either.IsRight(finalResult1))
value1, _ := either.Unwrap(finalResult1)
assert.Equal(t, 50, value1)
// Test with value above max
rules2 := ValidationRules{MinValue: 0, MaxValue: 30}
finalResult2 := result(rules2)(ctx)()
assert.True(t, either.IsRight(finalResult2))
value2, _ := either.Unwrap(finalResult2)
assert.Equal(t, 30, value2) // Clamped to max
// Test with value below min
rules3 := ValidationRules{MinValue: 60, MaxValue: 100}
finalResult3 := result(rules3)(ctx)()
assert.True(t, either.IsRight(finalResult3))
value3, _ := either.Unwrap(finalResult3)
assert.Equal(t, 60, value3) // Clamped to min
})
}

View File

@@ -53,12 +53,12 @@ import (
RIOE "github.com/IBM/fp-go/v2/context/readerioresult"
RIOEH "github.com/IBM/fp-go/v2/context/readerioresult/http"
E "github.com/IBM/fp-go/v2/either"
F "github.com/IBM/fp-go/v2/function"
R "github.com/IBM/fp-go/v2/http/builder"
H "github.com/IBM/fp-go/v2/http/headers"
LZ "github.com/IBM/fp-go/v2/lazy"
O "github.com/IBM/fp-go/v2/option"
"github.com/IBM/fp-go/v2/result"
)
// Requester converts an http/builder.Builder into a ReaderIOResult that produces HTTP requests.
@@ -143,10 +143,10 @@ func Requester(builder *R.Builder) RIOEH.Requester {
return F.Pipe5(
builder.GetBody(),
O.Fold(LZ.Of(E.Of[error](withoutBody)), E.Map[error](withBody)),
E.Ap[func(string) RIOE.ReaderIOResult[*http.Request]](builder.GetTargetURL()),
E.Flap[error, RIOE.ReaderIOResult[*http.Request]](builder.GetMethod()),
E.GetOrElse(RIOE.Left[*http.Request]),
O.Fold(LZ.Of(result.Of(withoutBody)), result.Map(withBody)),
result.Ap[RIOE.Kleisli[string, *http.Request]](builder.GetTargetURL()),
result.Flap[RIOE.ReaderIOResult[*http.Request]](builder.GetMethod()),
result.GetOrElse(RIOE.Left[*http.Request]),
RIOE.Map(func(req *http.Request) *http.Request {
req.Header = H.Monoid.Concat(req.Header, builder.GetHeaders())
return req

View File

@@ -73,7 +73,7 @@ type (
// It wraps a standard http.Client and provides functional HTTP operations.
client struct {
delegate *http.Client
doIOE func(*http.Request) IOE.IOEither[error, *http.Response]
doIOE IOE.Kleisli[error, *http.Request, *http.Response]
}
)
@@ -158,7 +158,7 @@ func MakeClient(httpClient *http.Client) Client {
// request := MakeGetRequest("https://api.example.com/data")
// fullResp := ReadFullResponse(client)(request)
// result := fullResp(context.Background())()
func ReadFullResponse(client Client) func(Requester) RIOE.ReaderIOResult[H.FullResponse] {
func ReadFullResponse(client Client) RIOE.Kleisli[Requester, H.FullResponse] {
return func(req Requester) RIOE.ReaderIOResult[H.FullResponse] {
return F.Flow3(
client.Do(req),
@@ -195,7 +195,7 @@ func ReadFullResponse(client Client) func(Requester) RIOE.ReaderIOResult[H.FullR
// request := MakeGetRequest("https://api.example.com/data")
// readBytes := ReadAll(client)
// result := readBytes(request)(context.Background())()
func ReadAll(client Client) func(Requester) RIOE.ReaderIOResult[[]byte] {
func ReadAll(client Client) RIOE.Kleisli[Requester, []byte] {
return F.Flow2(
ReadFullResponse(client),
RIOE.Map(H.Body),
@@ -219,7 +219,7 @@ func ReadAll(client Client) func(Requester) RIOE.ReaderIOResult[[]byte] {
// request := MakeGetRequest("https://api.example.com/text")
// readText := ReadText(client)
// result := readText(request)(context.Background())()
func ReadText(client Client) func(Requester) RIOE.ReaderIOResult[string] {
func ReadText(client Client) RIOE.Kleisli[Requester, string] {
return F.Flow2(
ReadAll(client),
RIOE.Map(B.ToString),
@@ -231,7 +231,7 @@ func ReadText(client Client) func(Requester) RIOE.ReaderIOResult[string] {
// Deprecated: Use [ReadJSON] instead. This function is kept for backward compatibility
// but will be removed in a future version. The capitalized version follows Go naming
// conventions for acronyms.
func ReadJson[A any](client Client) func(Requester) RIOE.ReaderIOResult[A] {
func ReadJson[A any](client Client) RIOE.Kleisli[Requester, A] {
return ReadJSON[A](client)
}
@@ -242,7 +242,7 @@ func ReadJson[A any](client Client) func(Requester) RIOE.ReaderIOResult[A] {
// 3. Reads the response body as bytes
//
// This function is used internally by ReadJSON to ensure proper JSON response handling.
func readJSON(client Client) func(Requester) RIOE.ReaderIOResult[[]byte] {
func readJSON(client Client) RIOE.Kleisli[Requester, []byte] {
return F.Flow3(
ReadFullResponse(client),
RIOE.ChainFirstEitherK(F.Flow2(
@@ -278,7 +278,7 @@ func readJSON(client Client) func(Requester) RIOE.ReaderIOResult[[]byte] {
// request := MakeGetRequest("https://api.example.com/user/1")
// readUser := ReadJSON[User](client)
// result := readUser(request)(context.Background())()
func ReadJSON[A any](client Client) func(Requester) RIOE.ReaderIOResult[A] {
func ReadJSON[A any](client Client) RIOE.Kleisli[Requester, A] {
return F.Flow2(
readJSON(client),
RIOE.ChainEitherK(J.Unmarshal[A]),

View File

@@ -0,0 +1,732 @@
// Copyright (c) 2023 - 2025 IBM Corp.
// All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package readerioresult provides logging utilities for ReaderIOResult computations.
// It includes functions for entry/exit logging with timing, correlation IDs, and context management.
package readerioresult
import (
"context"
"log/slog"
"sync/atomic"
"time"
"github.com/IBM/fp-go/v2/context/readerio"
F "github.com/IBM/fp-go/v2/function"
"github.com/IBM/fp-go/v2/io"
"github.com/IBM/fp-go/v2/logging"
"github.com/IBM/fp-go/v2/option"
"github.com/IBM/fp-go/v2/reader"
"github.com/IBM/fp-go/v2/result"
)
type (
// loggingContextKeyType is the type used as a key for storing logging information in context.Context
loggingContextKeyType int
// LoggingID is a unique identifier assigned to each logged operation for correlation
LoggingID uint64
// loggingContext holds the logging state for a computation, including timing,
// correlation ID, logger instance, and whether logging is enabled.
loggingContext struct {
contextID LoggingID // Unique identifier for this logged operation
startTime time.Time // When the operation started (for duration calculation)
logger *slog.Logger // The logger instance to use for this operation
isEnabled bool // Whether logging is enabled for this operation
}
)
var (
// loggingContextKey is the singleton key used to store/retrieve logging data from context
loggingContextKey loggingContextKeyType
// loggingCounter is an atomic counter that generates unique LoggingIDs
loggingCounter atomic.Uint64
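// loggingContextValue looks up the raw value stored under loggingContextKey in a context.Context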
loggingContextValue = F.Bind2nd(context.Context.Value, any(loggingContextKey))
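// withLoggingContextValue stores a value under loggingContextKey in a context.Context and returns the derived context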
withLoggingContextValue = F.Bind2of3(context.WithValue)(any(loggingContextKey))
// getLoggingContext retrieves the logging state (correlation ID, start time, logger, enabled flag) from the context.
// It returns the stored loggingContext; if the context does not carry one, a default logging
// context backed by the global logger is returned instead.
getLoggingContext = F.Flow3(
loggingContextValue,
option.ToType[loggingContext],
option.GetOrElse(getDefaultLoggingContext),
)
)
// getDefaultLoggingContext returns a default logging context with the global logger.
// This is used when no logging context is found in the context.Context.
func getDefaultLoggingContext() loggingContext {
return loggingContext{
logger: logging.GetLogger(),
}
}
// withLoggingContext creates an endomorphism that adds a logging context to a context.Context.
// This is used internally to store logging state in the context for retrieval by nested operations.
//
// Parameters:
// - lctx: The logging context to store
//
// Returns:
// - An endomorphism that adds the logging context to a context.Context
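//
// A minimal usage sketch (illustrative only; the concrete values are hypothetical):
//
// lctx := loggingContext{logger: slog.Default(), isEnabled: true}
// ctx := withLoggingContext(lctx)(context.Background())
// // getLoggingContext(ctx) now yields lctx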
func withLoggingContext(lctx loggingContext) Endomorphism[context.Context] {
return F.Bind2nd(withLoggingContextValue, any(lctx))
}
// LogEntryExitF creates a customizable operator that wraps a ReaderIOResult computation with entry/exit callbacks.
//
// This is a more flexible version of LogEntryExit that allows you to provide custom callbacks for
// entry and exit events. The onEntry callback receives the current context and can return a modified
// context (e.g., with additional logging information). The onExit callback receives the computation
// result and can perform custom logging, metrics collection, or cleanup.
//
// The function uses the bracket pattern to ensure that:
// - The onEntry callback is executed before the computation starts
// - The computation runs with the context returned by onEntry
// - The onExit callback is executed after the computation completes (success or failure)
// - The original result is preserved and returned unchanged
// - Cleanup happens even if the computation fails
//
// Type Parameters:
// - A: The success type of the ReaderIOResult
// - ANY: The return type of the onExit callback (typically any)
//
// Parameters:
// - onEntry: A ReaderIO that receives the current context and returns a (possibly modified) context.
// This is executed before the computation starts. Use this for logging entry, adding context values,
// starting timers, or initialization logic.
// - onExit: A Kleisli function that receives the Result[A] and returns a ReaderIO[ANY].
// This is executed after the computation completes, regardless of success or failure.
// Use this for logging exit, recording metrics, cleanup, or finalization logic.
//
// Returns:
// - An Operator that wraps the ReaderIOResult computation with the custom entry/exit callbacks
//
// Example with custom context modification:
//
// type RequestID string
//
// logOp := LogEntryExitF[User, any](
// func(ctx context.Context) IO[context.Context] {
// return func() context.Context {
// reqID := RequestID(uuid.New().String())
// log.Printf("[%s] Starting operation", reqID)
// return context.WithValue(ctx, "requestID", reqID)
// }
// },
// func(res Result[User]) ReaderIO[any] {
// return func(ctx context.Context) IO[any] {
// return func() any {
// reqID := ctx.Value("requestID").(RequestID)
// return F.Pipe1(
// res,
// result.Fold(
// func(err error) any {
// log.Printf("[%s] Operation failed: %v", reqID, err)
// return nil
// },
// func(_ User) any {
// log.Printf("[%s] Operation succeeded", reqID)
// return nil
// },
// ),
// )
// }
// }
// },
// )
//
// wrapped := logOp(fetchUser(123))
//
// Example with metrics collection:
//
// import "github.com/prometheus/client_golang/prometheus"
//
// metricsOp := LogEntryExitF[Response, any](
// func(ctx context.Context) IO[context.Context] {
// return func() context.Context {
// requestCount.WithLabelValues("api_call", "started").Inc()
// return context.WithValue(ctx, "startTime", time.Now())
// }
// },
// func(res Result[Response]) ReaderIO[any] {
// return func(ctx context.Context) IO[any] {
// return func() any {
// startTime := ctx.Value("startTime").(time.Time)
// duration := time.Since(startTime).Seconds()
//
// return F.Pipe1(
// res,
// result.Fold(
// func(err error) any {
// requestCount.WithLabelValues("api_call", "error").Inc()
// requestDuration.WithLabelValues("api_call", "error").Observe(duration)
// return nil
// },
// func(_ Response) any {
// requestCount.WithLabelValues("api_call", "success").Inc()
// requestDuration.WithLabelValues("api_call", "success").Observe(duration)
// return nil
// },
// ),
// )
// }
// }
// },
// )
//
// Use Cases:
// - Custom context modification: Adding request IDs, trace IDs, or other context values
// - Structured logging: Integration with zap, logrus, or other structured loggers
// - Metrics collection: Recording operation durations, success/failure rates
// - Distributed tracing: OpenTelemetry, Jaeger integration
// - Custom monitoring: Application-specific monitoring and alerting
//
// Note: LogEntryExit is implemented using LogEntryExitF with standard logging and context management.
// Use LogEntryExitF when you need more control over the entry/exit behavior or context modification.
func LogEntryExitF[A, ANY any](
onEntry ReaderIO[context.Context],
onExit readerio.Kleisli[Result[A], ANY],
) Operator[A, A] {
bracket := F.Bind13of3(readerio.Bracket[context.Context, Result[A], ANY])(onEntry, func(newCtx context.Context, res Result[A]) ReaderIO[ANY] {
return readerio.FromIO(onExit(res)(newCtx)) // Get the exit callback for this result
})
return func(src ReaderIOResult[A]) ReaderIOResult[A] {
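// The "use" step of the bracket: run src with the context produced by onEntry and
// lift its IOResult back into a ReaderIOResult so that bracket can hand the Result to onExit.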
return bracket(F.Flow2(
src,
FromIOResult,
))
}
}
// onEntry creates a ReaderIO that handles the entry logging for an operation.
// It generates a unique logging ID, captures the start time, and logs the entry message.
// The logging context is stored in the context.Context for later retrieval.
//
// Parameters:
// - logLevel: The slog.Level to use for logging (e.g., slog.LevelInfo, slog.LevelDebug)
// - cb: Callback function to retrieve the logger from the context
// - nameAttr: The slog.Attr containing the operation name
//
// Returns:
// - A ReaderIO that prepares the context with logging information and logs the entry
func onEntry(
logLevel slog.Level,
cb func(context.Context) *slog.Logger,
nameAttr slog.Attr,
) ReaderIO[context.Context] {
return func(ctx context.Context) IO[context.Context] {
// resolve the logger from the context via the callback
logger := cb(ctx)
return func() context.Context {
// check if the logger is enabled
if logger.Enabled(ctx, logLevel) {
// Generate unique logging ID and capture start time
contextID := LoggingID(loggingCounter.Add(1))
startTime := time.Now()
newLogger := logger.With("ID", contextID)
// log using ID
newLogger.LogAttrs(ctx, logLevel, "[entering]", nameAttr)
withCtx := withLoggingContext(loggingContext{
contextID: contextID,
startTime: startTime,
logger: newLogger,
isEnabled: true,
})
withLogger := logging.WithLogger(newLogger)
return withCtx(withLogger(ctx))
}
// logging disabled
withCtx := withLoggingContext(loggingContext{
logger: logger,
isEnabled: false,
})
return withCtx(ctx)
}
}
}
// onExitAny creates a Kleisli function that handles exit logging for an operation.
// It logs either success or error based on the Result, including the operation duration.
// Only logs if logging was enabled during entry (checked via loggingContext.isEnabled).
//
// Parameters:
// - logLevel: The slog.Level to use for logging
// - nameAttr: The slog.Attr containing the operation name
//
// Returns:
// - A Kleisli function that logs the exit/error and returns nil
func onExitAny(
logLevel slog.Level,
nameAttr slog.Attr,
) readerio.Kleisli[Result[any], any] {
return func(res Result[any]) ReaderIO[any] {
return func(ctx context.Context) IO[any] {
value := getLoggingContext(ctx)
if value.isEnabled {
return func() any {
// Compute how long the operation ran since entry
durationAttr := slog.Duration("duration", time.Since(value.startTime))
// Log error with ID and duration
onError := func(err error) any {
value.logger.LogAttrs(ctx, logLevel, "[throwing]",
nameAttr,
durationAttr,
slog.Any("error", err))
return nil
}
// Log success with ID and duration
onSuccess := func(_ any) any {
value.logger.LogAttrs(ctx, logLevel, "[exiting ]", nameAttr, durationAttr)
return nil
}
return F.Pipe1(
res,
result.Fold(onError, onSuccess),
)
}
}
// nothing to do
return io.Of[any](nil)
}
}
}
// LogEntryExitWithCallback creates an operator that logs entry and exit of a ReaderIOResult computation
// using a custom logger callback and log level. This provides more control than LogEntryExit.
//
// This function allows you to:
// - Use a custom log level (Debug, Info, Warn, Error)
// - Retrieve the logger from the context using a custom callback
// - Control whether logging is enabled based on the logger's configuration
//
// Type Parameters:
// - A: The success type of the ReaderIOResult
//
// Parameters:
// - logLevel: The slog.Level to use for all log messages (entry, exit, error)
// - cb: Callback function to retrieve the *slog.Logger from the context
// - name: A descriptive name for the operation
//
// Returns:
// - An Operator that wraps the ReaderIOResult with customizable logging
//
// Example with custom log level:
//
// // Log at debug level
// debugOp := LogEntryExitWithCallback[User](
// slog.LevelDebug,
// logging.GetLoggerFromContext,
// "fetchUser",
// )
// result := debugOp(fetchUser(123))
//
// Example with custom logger callback:
//
// type loggerKey int
// const myLoggerKey loggerKey = 0
//
// getMyLogger := func(ctx context.Context) *slog.Logger {
// if logger := ctx.Value(myLoggerKey); logger != nil {
// return logger.(*slog.Logger)
// }
// return slog.Default()
// }
//
// customOp := LogEntryExitWithCallback[Data](
// slog.LevelInfo,
// getMyLogger,
// "processData",
// )
func LogEntryExitWithCallback[A any](
logLevel slog.Level,
cb func(context.Context) *slog.Logger,
name string) Operator[A, A] {
nameAttr := slog.String("name", name)
return LogEntryExitF(
onEntry(logLevel, cb, nameAttr),
F.Flow2(
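// discard the success value before exit logging; only name, status and timing are reported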
result.MapTo[A, any](nil),
onExitAny(logLevel, nameAttr),
),
)
}
// LogEntryExit creates an operator that logs the entry and exit of a ReaderIOResult computation with timing and correlation IDs.
//
// This function wraps a ReaderIOResult computation with automatic logging that tracks:
// - Entry: Logs when the computation starts with "[entering <id>] <name>"
// - Exit: Logs when the computation completes successfully with "[exiting <id>] <name> [duration]"
// - Error: Logs when the computation fails with "[throwing <id>] <name> [duration]: <error>"
//
// Each logged operation is assigned a unique LoggingID (a monotonically increasing counter) that
// appears in all log messages for that operation. This ID enables correlation of entry and exit
// logs, even when multiple operations are running concurrently or are interleaved.
//
// The logging information (start time and ID) is stored in the context and can be retrieved using
// getLoggingContext or getLoggingID. This allows nested operations to access the parent operation's
// logging information.
//
// Type Parameters:
// - A: The success type of the ReaderIOResult
//
// Parameters:
// - name: A descriptive name for the computation, used in log messages to identify the operation
//
// Returns:
// - An Operator that wraps the ReaderIOResult computation with entry/exit logging
//
// The function uses the bracket pattern to ensure that:
// - Entry is logged before the computation starts
// - A unique LoggingID is assigned and stored in the context
// - Exit/error is logged after the computation completes, regardless of success or failure
// - Timing is accurate, measuring from entry to exit
// - The original result is preserved and returned unchanged
//
// Log Format:
// - Entry: "[entering <id>] <name>"
// - Success: "[exiting <id>] <name> [<duration>s]"
// - Error: "[throwing <id>] <name> [<duration>s]: <error>"
//
// Example with successful computation:
//
// fetchUser := func(id int) ReaderIOResult[User] {
// return Of(User{ID: id, Name: "Alice"})
// }
//
// // Wrap with logging
// loggedFetch := LogEntryExit[User]("fetchUser")(fetchUser(123))
//
// // Execute
// result := loggedFetch(context.Background())()
// // Logs:
// // [entering 1] fetchUser
// // [exiting 1] fetchUser [0.1s]
//
// Example with error:
//
// failingOp := func() ReaderIOResult[string] {
// return Left[string](errors.New("connection timeout"))
// }
//
// logged := LogEntryExit[string]("failingOp")(failingOp())
// result := logged(context.Background())()
// // Logs:
// // [entering 2] failingOp
// // [throwing 2] failingOp [0.0s]: connection timeout
//
// Example with nested operations:
//
// fetchOrders := func(userID int) ReaderIOResult[[]Order] {
// return Of([]Order{{ID: 1}})
// }
//
// pipeline := F.Pipe3(
// fetchUser(123),
// LogEntryExit[User]("fetchUser"),
// Chain(func(user User) ReaderIOResult[[]Order] {
// return fetchOrders(user.ID)
// }),
// LogEntryExit[[]Order]("fetchOrders"),
// )
//
// result := pipeline(context.Background())()
// // Logs:
// // [entering 3] fetchUser
// // [exiting 3] fetchUser [0.1s]
// // [entering 4] fetchOrders
// // [exiting 4] fetchOrders [0.2s]
//
// Example with concurrent operations:
//
// // Multiple operations can run concurrently, each with unique IDs
// op1 := LogEntryExit[Data]("operation1")(fetchData(1))
// op2 := LogEntryExit[Data]("operation2")(fetchData(2))
//
// go op1(context.Background())()
// go op2(context.Background())()
// // Logs (order may vary):
// // [entering 5] operation1
// // [entering 6] operation2
// // [exiting 5] operation1 [0.1s]
// // [exiting 6] operation2 [0.2s]
// // The IDs allow correlation even when logs are interleaved
//
// Use Cases:
// - Debugging: Track execution flow through complex ReaderIOResult chains with correlation IDs
// - Performance monitoring: Identify slow operations with timing information
// - Production logging: Monitor critical operations with unique identifiers
// - Concurrent operations: Correlate logs from multiple concurrent operations
// - Nested operations: Track parent-child relationships in operation hierarchies
// - Troubleshooting: Quickly identify where errors occur and correlate with entry logs
//
//go:inline
func LogEntryExit[A any](name string) Operator[A, A] {
return LogEntryExitWithCallback[A](slog.LevelInfo, logging.GetLoggerFromContext, name)
}
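// curriedLog builds a curried logging helper: given a slog.Attr it returns a function of
// context.Context producing an IO action that, when executed, resolves the logger via cb
// and logs the message together with the attribute at the given level.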
func curriedLog(
logLevel slog.Level,
cb func(context.Context) *slog.Logger,
message string) func(slog.Attr) func(context.Context) func() struct{} {
return F.Curry2(func(a slog.Attr, ctx context.Context) func() struct{} {
logger := cb(ctx)
return func() struct{} {
logger.LogAttrs(ctx, logLevel, message, a)
return struct{}{}
}
})
}
// SLogWithCallback creates a Kleisli arrow that logs a Result value (success or error) with a custom logger and log level.
//
// This function logs both successful values and errors, making it useful for debugging and monitoring
// Result values as they flow through a computation. Unlike TapSLog which only logs successful values,
// SLogWithCallback logs the Result regardless of whether it contains a value or an error.
//
// The logged output includes:
// - For success: The message with the value as a structured "value" attribute
// - For error: The message with the error as a structured "error" attribute
//
// The Result is passed through unchanged after logging.
//
// Type Parameters:
// - A: The success type of the Result
//
// Parameters:
// - logLevel: The slog.Level to use for logging (e.g., slog.LevelInfo, slog.LevelDebug)
// - cb: Callback function to retrieve the *slog.Logger from the context
// - message: A descriptive message to include in the log entry
//
// Returns:
// - A Kleisli arrow that logs the Result (value or error) and returns it unchanged
//
// Example with custom log level:
//
// debugLog := SLogWithCallback[User](
// slog.LevelDebug,
// logging.GetLoggerFromContext,
// "User result",
// )
//
// pipeline := F.Pipe2(
// fetchUser(123),
// Chain(debugLog),
// Map(func(u User) string { return u.Name }),
// )
//
// Example with custom logger:
//
// type loggerKey int
// const myLoggerKey loggerKey = 0
//
// getMyLogger := func(ctx context.Context) *slog.Logger {
// if logger := ctx.Value(myLoggerKey); logger != nil {
// return logger.(*slog.Logger)
// }
// return slog.Default()
// }
//
// customLog := SLogWithCallback[Data](
// slog.LevelWarn,
// getMyLogger,
// "Data processing result",
// )
//
// Use Cases:
// - Debugging: Log both successful and failed Results in a pipeline
// - Error tracking: Monitor error occurrences with custom log levels
// - Custom logging: Use application-specific loggers and log levels
// - Conditional logging: Enable/disable logging based on logger configuration
func SLogWithCallback[A any](
logLevel slog.Level,
cb func(context.Context) *slog.Logger,
message string) Kleisli[Result[A], A] {
return F.Pipe1(
F.Flow2(
// create the attribute to log depending on the condition
result.ToSLogAttr[A](),
// create an `IO` that logs the attribute
curriedLog(logLevel, cb, message),
),
// pass the original Result through unchanged after the logging effect
reader.Chain(reader.Sequence(readerio.MapTo[struct{}, Result[A]])),
)
}
// SLog creates a Kleisli arrow that logs a Result value (success or error) with a message.
//
// This function logs both successful values and errors at Info level using the logger from the context.
// It's a convenience wrapper around SLogWithCallback with standard settings.
//
// The logged output includes:
// - For success: The message with the value as a structured "value" attribute
// - For error: The message with the error as a structured "error" attribute
//
// The Result is passed through unchanged after logging, making this function transparent in the
// computation pipeline.
//
// Type Parameters:
// - A: The success type of the Result
//
// Parameters:
// - message: A descriptive message to include in the log entry
//
// Returns:
// - A Kleisli arrow that logs the Result (value or error) and returns it unchanged
//
// Example with successful Result:
//
// pipeline := F.Pipe2(
// fetchUser(123),
// Chain(SLog[User]("Fetched user")),
// Map(func(u User) string { return u.Name }),
// )
//
// result := pipeline(context.Background())()
// // If successful, logs: "Fetched user" value={ID:123 Name:"Alice"}
// // If error, logs: "Fetched user" error="user not found"
//
// Example in error handling pipeline:
//
// pipeline := F.Pipe3(
// fetchData(id),
// Chain(SLog[Data]("Data fetched")),
// Chain(validateData),
// Chain(SLog[Data]("Data validated")),
// Chain(processData),
// )
//
// // Logs each step, including errors:
// // "Data fetched" value={...} or error="..."
// // "Data validated" value={...} or error="..."
//
// Use Cases:
// - Debugging: Track both successful and failed Results in a pipeline
// - Error monitoring: Log errors as they occur in the computation
// - Flow tracking: See the progression of Results through a pipeline
// - Troubleshooting: Identify where errors are introduced or propagated
//
// Note: This function logs the Result itself (which may contain an error), not just successful values.
// For logging only successful values, use TapSLog instead.
//
//go:inline
func SLog[A any](message string) Kleisli[Result[A], A] {
return SLogWithCallback[A](slog.LevelInfo, logging.GetLoggerFromContext, message)
}
// TapSLog creates an operator that logs only successful values with a message and passes them through unchanged.
//
// This function is useful for debugging and monitoring values as they flow through a ReaderIOResult
// computation chain. Unlike SLog which logs both successes and errors, TapSLog only logs when the
// computation is successful. If the computation contains an error, no logging occurs and the error
// is propagated unchanged.
//
// The logged output includes:
// - The provided message
// - The value being passed through (as a structured "value" attribute)
//
// Type Parameters:
// - A: The type of the value to log and pass through
//
// Parameters:
// - message: A descriptive message to include in the log entry
//
// Returns:
// - An Operator that logs successful values and returns them unchanged
//
// Example with simple value logging:
//
// fetchUser := func(id int) ReaderIOResult[User] {
// return Of(User{ID: id, Name: "Alice"})
// }
//
// pipeline := F.Pipe2(
// fetchUser(123),
// TapSLog[User]("Fetched user"),
// Map(func(u User) string { return u.Name }),
// )
//
// result := pipeline(context.Background())()
// // Logs: "Fetched user" value={ID:123 Name:"Alice"}
// // Returns: result.Of("Alice")
//
// Example in a processing pipeline:
//
// processOrder := F.Pipe4(
// fetchOrder(orderId),
// TapSLog[Order]("Order fetched"),
// Chain(validateOrder),
// TapSLog[Order]("Order validated"),
// Chain(processPayment),
// TapSLog[Payment]("Payment processed"),
// )
//
// result := processOrder(context.Background())()
// // Logs each successful step with the intermediate values
// // If any step fails, subsequent TapSLog calls don't log
//
// Example with error handling:
//
// pipeline := F.Pipe3(
// fetchData(id),
// TapSLog[Data]("Data fetched"),
// Chain(func(d Data) ReaderIOResult[Result] {
// if d.IsValid() {
// return Of(processData(d))
// }
// return Left[Result](errors.New("invalid data"))
// }),
// TapSLog[Result]("Data processed"),
// )
//
// // If fetchData succeeds: logs "Data fetched" with the data
// // If processing succeeds: logs "Data processed" with the result
// // If processing fails: "Data processed" is NOT logged (error propagates)
//
// Use Cases:
// - Debugging: Inspect intermediate successful values in a computation pipeline
// - Monitoring: Track successful data flow through complex operations
// - Troubleshooting: Identify where successful computations stop (last logged value before error)
// - Auditing: Log important successful values for compliance or security
// - Development: Understand data transformations during development
//
// Note: This function only logs successful values. Errors are silently propagated without logging.
// For logging both successes and errors, use SLog instead.
//
//go:inline
func TapSLog[A any](message string) Operator[A, A] {
return readerio.ChainFirst(SLog[A](message))
}

View File

@@ -0,0 +1,662 @@
package readerioresult
import (
"bytes"
"context"
"errors"
"log/slog"
"strconv"
"strings"
"testing"
"time"
F "github.com/IBM/fp-go/v2/function"
"github.com/IBM/fp-go/v2/logging"
N "github.com/IBM/fp-go/v2/number"
"github.com/IBM/fp-go/v2/result"
S "github.com/IBM/fp-go/v2/string"
"github.com/stretchr/testify/assert"
)
// TestLoggingContext tests basic nested logging with correlation IDs
func TestLoggingContext(t *testing.T) {
data := F.Pipe2(
Of("Sample"),
LogEntryExit[string]("TestLoggingContext1"),
LogEntryExit[string]("TestLoggingContext2"),
)
assert.Equal(t, result.Of("Sample"), data(context.Background())())
}
// TestLogEntryExitSuccess tests successful operation logging
func TestLogEntryExitSuccess(t *testing.T) {
var buf bytes.Buffer
logger := slog.New(slog.NewTextHandler(&buf, &slog.HandlerOptions{
Level: slog.LevelInfo,
}))
oldLogger := logging.SetLogger(logger)
defer logging.SetLogger(oldLogger)
operation := F.Pipe1(
Of("success value"),
LogEntryExit[string]("TestOperation"),
)
res := operation(context.Background())()
assert.Equal(t, result.Of("success value"), res)
logOutput := buf.String()
assert.Contains(t, logOutput, "[entering]")
assert.Contains(t, logOutput, "[exiting ]")
assert.Contains(t, logOutput, "TestOperation")
assert.Contains(t, logOutput, "ID=")
assert.Contains(t, logOutput, "duration=")
}
// TestLogEntryExitError tests error operation logging
func TestLogEntryExitError(t *testing.T) {
var buf bytes.Buffer
logger := slog.New(slog.NewTextHandler(&buf, &slog.HandlerOptions{
Level: slog.LevelInfo,
}))
oldLogger := logging.SetLogger(logger)
defer logging.SetLogger(oldLogger)
testErr := errors.New("test error")
operation := F.Pipe1(
Left[string](testErr),
LogEntryExit[string]("FailingOperation"),
)
res := operation(context.Background())()
assert.True(t, result.IsLeft(res))
logOutput := buf.String()
assert.Contains(t, logOutput, "[entering]")
assert.Contains(t, logOutput, "[throwing]")
assert.Contains(t, logOutput, "FailingOperation")
assert.Contains(t, logOutput, "test error")
assert.Contains(t, logOutput, "ID=")
assert.Contains(t, logOutput, "duration=")
}
// TestLogEntryExitNested tests nested operations with different IDs
func TestLogEntryExitNested(t *testing.T) {
var buf bytes.Buffer
logger := slog.New(slog.NewTextHandler(&buf, &slog.HandlerOptions{
Level: slog.LevelInfo,
}))
oldLogger := logging.SetLogger(logger)
defer logging.SetLogger(oldLogger)
innerOp := F.Pipe1(
Of("inner"),
LogEntryExit[string]("InnerOp"),
)
outerOp := F.Pipe2(
Of("outer"),
LogEntryExit[string]("OuterOp"),
Chain(func(s string) ReaderIOResult[string] {
return innerOp
}),
)
res := outerOp(context.Background())()
assert.True(t, result.IsRight(res))
logOutput := buf.String()
// Should have two different IDs
assert.Contains(t, logOutput, "OuterOp")
assert.Contains(t, logOutput, "InnerOp")
// Count entering and exiting logs
enterCount := strings.Count(logOutput, "[entering]")
exitCount := strings.Count(logOutput, "[exiting ]")
assert.Equal(t, 2, enterCount, "Should have 2 entering logs")
assert.Equal(t, 2, exitCount, "Should have 2 exiting logs")
}
// TestLogEntryExitWithCallback tests custom log level and callback
func TestLogEntryExitWithCallback(t *testing.T) {
var buf bytes.Buffer
logger := slog.New(slog.NewTextHandler(&buf, &slog.HandlerOptions{
Level: slog.LevelDebug,
}))
customCallback := func(ctx context.Context) *slog.Logger {
return logger
}
operation := F.Pipe1(
Of(42),
LogEntryExitWithCallback[int](slog.LevelDebug, customCallback, "DebugOperation"),
)
res := operation(context.Background())()
assert.Equal(t, result.Of(42), res)
logOutput := buf.String()
assert.Contains(t, logOutput, "[entering]")
assert.Contains(t, logOutput, "[exiting ]")
assert.Contains(t, logOutput, "DebugOperation")
assert.Contains(t, logOutput, "level=DEBUG")
}
// TestLogEntryExitDisabled tests that logging can be disabled
func TestLogEntryExitDisabled(t *testing.T) {
var buf bytes.Buffer
// Create logger with level that disables info logs
logger := slog.New(slog.NewTextHandler(&buf, &slog.HandlerOptions{
Level: slog.LevelError, // Only log errors
}))
oldLogger := logging.SetLogger(logger)
defer logging.SetLogger(oldLogger)
operation := F.Pipe1(
Of("value"),
LogEntryExit[string]("DisabledOperation"),
)
res := operation(context.Background())()
assert.True(t, result.IsRight(res))
// Should have no logs since level is ERROR
logOutput := buf.String()
assert.Empty(t, logOutput, "Should have no logs when logging is disabled")
}
// TestLogEntryExitF tests custom entry/exit callbacks
func TestLogEntryExitF(t *testing.T) {
var entryCount, exitCount int
onEntry := func(ctx context.Context) IO[context.Context] {
return func() context.Context {
entryCount++
return ctx
}
}
onExit := func(res Result[string]) ReaderIO[any] {
return func(ctx context.Context) IO[any] {
return func() any {
exitCount++
return nil
}
}
}
operation := F.Pipe1(
Of("test"),
LogEntryExitF(onEntry, onExit),
)
res := operation(context.Background())()
assert.True(t, result.IsRight(res))
assert.Equal(t, 1, entryCount, "Entry callback should be called once")
assert.Equal(t, 1, exitCount, "Exit callback should be called once")
}
// TestLogEntryExitFWithError tests custom callbacks with error
func TestLogEntryExitFWithError(t *testing.T) {
var entryCount, exitCount int
var capturedError error
onEntry := func(ctx context.Context) IO[context.Context] {
return func() context.Context {
entryCount++
return ctx
}
}
onExit := func(res Result[string]) ReaderIO[any] {
return func(ctx context.Context) IO[any] {
return func() any {
exitCount++
if result.IsLeft(res) {
_, capturedError = result.Unwrap(res)
}
return nil
}
}
}
testErr := errors.New("custom error")
operation := F.Pipe1(
Left[string](testErr),
LogEntryExitF(onEntry, onExit),
)
res := operation(context.Background())()
assert.True(t, result.IsLeft(res))
assert.Equal(t, 1, entryCount, "Entry callback should be called once")
assert.Equal(t, 1, exitCount, "Exit callback should be called once")
assert.Equal(t, testErr, capturedError, "Should capture the error")
}
// TestLoggingIDUniqueness tests that logging IDs are unique
func TestLoggingIDUniqueness(t *testing.T) {
var buf bytes.Buffer
logger := slog.New(slog.NewTextHandler(&buf, &slog.HandlerOptions{
Level: slog.LevelInfo,
}))
oldLogger := logging.SetLogger(logger)
defer logging.SetLogger(oldLogger)
// Run multiple operations
for i := range 5 {
op := F.Pipe1(
Of(i),
LogEntryExit[int]("Operation"),
)
op(context.Background())()
}
logOutput := buf.String()
// Extract all IDs and verify they're unique
lines := strings.Split(logOutput, "\n")
ids := make(map[string]bool)
for _, line := range lines {
if strings.Contains(line, "ID=") {
// Extract ID value
parts := strings.Split(line, "ID=")
if len(parts) > 1 {
idPart := strings.Fields(parts[1])[0]
ids[idPart] = true
}
}
}
// Should have 5 unique IDs (one per operation)
assert.GreaterOrEqual(t, len(ids), 5, "Should have at least 5 unique IDs")
}
// TestLogEntryExitWithContextLogger tests using logger from context
func TestLogEntryExitWithContextLogger(t *testing.T) {
var buf bytes.Buffer
contextLogger := slog.New(slog.NewTextHandler(&buf, &slog.HandlerOptions{
Level: slog.LevelInfo,
}))
ctx := logging.WithLogger(contextLogger)(context.Background())
operation := F.Pipe1(
Of("context value"),
LogEntryExit[string]("ContextOperation"),
)
res := operation(ctx)()
assert.True(t, result.IsRight(res))
logOutput := buf.String()
assert.Contains(t, logOutput, "[entering]")
assert.Contains(t, logOutput, "[exiting ]")
assert.Contains(t, logOutput, "ContextOperation")
}
// TestLogEntryExitTiming tests that duration is captured
func TestLogEntryExitTiming(t *testing.T) {
var buf bytes.Buffer
logger := slog.New(slog.NewTextHandler(&buf, &slog.HandlerOptions{
Level: slog.LevelInfo,
}))
oldLogger := logging.SetLogger(logger)
defer logging.SetLogger(oldLogger)
// Operation with delay
slowOp := func(ctx context.Context) IOResult[string] {
return func() Result[string] {
time.Sleep(10 * time.Millisecond)
return result.Of("done")
}
}
operation := F.Pipe1(
slowOp,
LogEntryExit[string]("SlowOperation"),
)
res := operation(context.Background())()
assert.True(t, result.IsRight(res))
logOutput := buf.String()
assert.Contains(t, logOutput, "duration=")
// Verify duration is present in exit log
lines := strings.Split(logOutput, "\n")
var foundDuration bool
for _, line := range lines {
if strings.Contains(line, "[exiting ]") && strings.Contains(line, "duration=") {
foundDuration = true
break
}
}
assert.True(t, foundDuration, "Exit log should contain duration")
}
// TestLogEntryExitChainedOperations tests complex chained operations
func TestLogEntryExitChainedOperations(t *testing.T) {
var buf bytes.Buffer
logger := slog.New(slog.NewTextHandler(&buf, &slog.HandlerOptions{
Level: slog.LevelInfo,
}))
oldLogger := logging.SetLogger(logger)
defer logging.SetLogger(oldLogger)
step1 := F.Pipe1(
Of(1),
LogEntryExit[int]("Step1"),
)
step2 := F.Flow3(
N.Mul(2),
Of,
LogEntryExit[int]("Step2"),
)
step3 := F.Flow3(
strconv.Itoa,
Of,
LogEntryExit[string]("Step3"),
)
pipeline := F.Pipe1(
step1,
Chain(F.Flow2(
step2,
Chain(step3),
)),
)
res := pipeline(context.Background())()
assert.Equal(t, result.Of("2"), res)
logOutput := buf.String()
assert.Contains(t, logOutput, "Step1")
assert.Contains(t, logOutput, "Step2")
assert.Contains(t, logOutput, "Step3")
// Verify all steps completed
assert.Equal(t, 3, strings.Count(logOutput, "[entering]"))
assert.Equal(t, 3, strings.Count(logOutput, "[exiting ]"))
}
// TestTapSLog tests basic TapSLog functionality
func TestTapSLog(t *testing.T) {
var buf bytes.Buffer
logger := slog.New(slog.NewTextHandler(&buf, &slog.HandlerOptions{
Level: slog.LevelInfo,
}))
oldLogger := logging.SetLogger(logger)
defer logging.SetLogger(oldLogger)
operation := F.Pipe2(
Of(42),
TapSLog[int]("Processing value"),
Map(N.Mul(2)),
)
res := operation(context.Background())()
assert.Equal(t, result.Of(84), res)
logOutput := buf.String()
assert.Contains(t, logOutput, "Processing value")
assert.Contains(t, logOutput, "value=42")
}
// TestTapSLogInPipeline tests TapSLog in a multi-step pipeline
func TestTapSLogInPipeline(t *testing.T) {
var buf bytes.Buffer
logger := slog.New(slog.NewTextHandler(&buf, &slog.HandlerOptions{
Level: slog.LevelInfo,
}))
oldLogger := logging.SetLogger(logger)
defer logging.SetLogger(oldLogger)
step1 := F.Pipe2(
Of("hello"),
TapSLog[string]("Step 1: Initial value"),
Map(func(s string) string { return s + " world" }),
)
step2 := F.Pipe2(
step1,
TapSLog[string]("Step 2: After concatenation"),
Map(S.Size),
)
pipeline := F.Pipe1(
step2,
TapSLog[int]("Step 3: Final length"),
)
res := pipeline(context.Background())()
assert.Equal(t, result.Of(11), res)
logOutput := buf.String()
assert.Contains(t, logOutput, "Step 1: Initial value")
assert.Contains(t, logOutput, "value=hello")
assert.Contains(t, logOutput, "Step 2: After concatenation")
assert.Contains(t, logOutput, `value="hello world"`)
assert.Contains(t, logOutput, "Step 3: Final length")
assert.Contains(t, logOutput, "value=11")
}
// TestTapSLogWithError tests that TapSLog logs errors (via SLog)
func TestTapSLogWithError(t *testing.T) {
var buf bytes.Buffer
logger := slog.New(slog.NewTextHandler(&buf, &slog.HandlerOptions{
Level: slog.LevelInfo,
}))
oldLogger := logging.SetLogger(logger)
defer logging.SetLogger(oldLogger)
testErr := errors.New("computation failed")
pipeline := F.Pipe2(
Left[int](testErr),
TapSLog[int]("Error logged"),
Map(N.Mul(2)),
)
res := pipeline(context.Background())()
assert.True(t, result.IsLeft(res))
logOutput := buf.String()
// TapSLog uses SLog internally, which logs both successes and errors
assert.Contains(t, logOutput, "Error logged")
assert.Contains(t, logOutput, "error")
assert.Contains(t, logOutput, "computation failed")
}
// TestTapSLogWithStruct tests TapSLog with structured data
func TestTapSLogWithStruct(t *testing.T) {
var buf bytes.Buffer
logger := slog.New(slog.NewTextHandler(&buf, &slog.HandlerOptions{
Level: slog.LevelInfo,
}))
oldLogger := logging.SetLogger(logger)
defer logging.SetLogger(oldLogger)
type User struct {
ID int
Name string
}
user := User{ID: 123, Name: "Alice"}
operation := F.Pipe2(
Of(user),
TapSLog[User]("User data"),
Map(func(u User) string { return u.Name }),
)
res := operation(context.Background())()
assert.Equal(t, result.Of("Alice"), res)
logOutput := buf.String()
assert.Contains(t, logOutput, "User data")
assert.Contains(t, logOutput, "ID:123")
assert.Contains(t, logOutput, "Name:Alice")
}
// TestTapSLogDisabled tests that TapSLog respects logger level
func TestTapSLogDisabled(t *testing.T) {
var buf bytes.Buffer
// Create logger with level that disables info logs
logger := slog.New(slog.NewTextHandler(&buf, &slog.HandlerOptions{
Level: slog.LevelError, // Only log errors
}))
oldLogger := logging.SetLogger(logger)
defer logging.SetLogger(oldLogger)
operation := F.Pipe2(
Of(42),
TapSLog[int]("This should not be logged"),
Map(N.Mul(2)),
)
res := operation(context.Background())()
assert.Equal(t, result.Of(84), res)
// Should have no logs since level is ERROR
logOutput := buf.String()
assert.Empty(t, logOutput, "Should have no logs when logging is disabled")
}
// TestTapSLogWithContextLogger tests TapSLog using logger from context
func TestTapSLogWithContextLogger(t *testing.T) {
var buf bytes.Buffer
contextLogger := slog.New(slog.NewTextHandler(&buf, &slog.HandlerOptions{
Level: slog.LevelInfo,
}))
ctx := logging.WithLogger(contextLogger)(context.Background())
operation := F.Pipe2(
Of("test value"),
TapSLog[string]("Context logger test"),
Map(S.Size),
)
res := operation(ctx)()
assert.Equal(t, result.Of(10), res)
logOutput := buf.String()
assert.Contains(t, logOutput, "Context logger test")
assert.Contains(t, logOutput, `value="test value"`)
}
// TestSLogLogsSuccessValue tests that SLog logs successful Result values
func TestSLogLogsSuccessValue(t *testing.T) {
var buf bytes.Buffer
logger := slog.New(slog.NewTextHandler(&buf, &slog.HandlerOptions{
Level: slog.LevelInfo,
}))
oldLogger := logging.SetLogger(logger)
defer logging.SetLogger(oldLogger)
ctx := context.Background()
// Create a Result and log it
res1 := result.Of(42)
logged := SLog[int]("Result value")(res1)(ctx)()
assert.Equal(t, result.Of(42), logged)
logOutput := buf.String()
assert.Contains(t, logOutput, "Result value")
assert.Contains(t, logOutput, "value=42")
}
// TestSLogLogsErrorValue tests that SLog logs error Result values
func TestSLogLogsErrorValue(t *testing.T) {
var buf bytes.Buffer
logger := slog.New(slog.NewTextHandler(&buf, &slog.HandlerOptions{
Level: slog.LevelInfo,
}))
oldLogger := logging.SetLogger(logger)
defer logging.SetLogger(oldLogger)
ctx := context.Background()
testErr := errors.New("test error")
// Create an error Result and log it
res1 := result.Left[int](testErr)
logged := SLog[int]("Result value")(res1)(ctx)()
assert.True(t, result.IsLeft(logged))
logOutput := buf.String()
assert.Contains(t, logOutput, "Result value")
assert.Contains(t, logOutput, "error")
assert.Contains(t, logOutput, "test error")
}
// TestSLogWithCallbackCustomLevel tests SLogWithCallback with custom log level
func TestSLogWithCallbackCustomLevel(t *testing.T) {
var buf bytes.Buffer
logger := slog.New(slog.NewTextHandler(&buf, &slog.HandlerOptions{
Level: slog.LevelDebug,
}))
customCallback := func(ctx context.Context) *slog.Logger {
return logger
}
ctx := context.Background()
// Create a Result and log it with custom callback
res1 := result.Of(42)
logged := SLogWithCallback[int](slog.LevelDebug, customCallback, "Debug result")(res1)(ctx)()
assert.Equal(t, result.Of(42), logged)
logOutput := buf.String()
assert.Contains(t, logOutput, "Debug result")
assert.Contains(t, logOutput, "value=42")
assert.Contains(t, logOutput, "level=DEBUG")
}
// TestSLogWithCallbackLogsError tests SLogWithCallback logs errors
func TestSLogWithCallbackLogsError(t *testing.T) {
var buf bytes.Buffer
logger := slog.New(slog.NewTextHandler(&buf, &slog.HandlerOptions{
Level: slog.LevelWarn,
}))
customCallback := func(ctx context.Context) *slog.Logger {
return logger
}
ctx := context.Background()
testErr := errors.New("warning error")
// Create an error Result and log it with custom callback
res1 := result.Left[int](testErr)
logged := SLogWithCallback[int](slog.LevelWarn, customCallback, "Warning result")(res1)(ctx)()
assert.True(t, result.IsLeft(logged))
logOutput := buf.String()
assert.Contains(t, logOutput, "Warning result")
assert.Contains(t, logOutput, "error")
assert.Contains(t, logOutput, "warning error")
assert.Contains(t, logOutput, "level=WARN")
}

View File

@@ -19,6 +19,7 @@ import (
"context"
"time"
"github.com/IBM/fp-go/v2/context/readerio"
"github.com/IBM/fp-go/v2/context/readerresult"
"github.com/IBM/fp-go/v2/either"
"github.com/IBM/fp-go/v2/errors"
@@ -26,10 +27,11 @@ import (
"github.com/IBM/fp-go/v2/io"
"github.com/IBM/fp-go/v2/ioeither"
"github.com/IBM/fp-go/v2/ioresult"
"github.com/IBM/fp-go/v2/option"
"github.com/IBM/fp-go/v2/reader"
"github.com/IBM/fp-go/v2/readerio"
RIOR "github.com/IBM/fp-go/v2/readerioresult"
"github.com/IBM/fp-go/v2/readeroption"
"github.com/IBM/fp-go/v2/result"
)
const (
@@ -150,7 +152,7 @@ func MapTo[A, B any](b B) Operator[A, B] {
//
//go:inline
func MonadChain[A, B any](ma ReaderIOResult[A], f Kleisli[A, B]) ReaderIOResult[B] {
return RIOR.MonadChain(ma, f)
return RIOR.MonadChain(ma, WithContextK(f))
}
// Chain sequences two [ReaderIOResult] computations, where the second depends on the result of the first.
@@ -163,7 +165,7 @@ func MonadChain[A, B any](ma ReaderIOResult[A], f Kleisli[A, B]) ReaderIOResult[
//
//go:inline
func Chain[A, B any](f Kleisli[A, B]) Operator[A, B] {
return RIOR.Chain(f)
return RIOR.Chain(WithContextK(f))
}
// MonadChainFirst sequences two [ReaderIOResult] computations but returns the result of the first.
@@ -177,7 +179,12 @@ func Chain[A, B any](f Kleisli[A, B]) Operator[A, B] {
//
//go:inline
func MonadChainFirst[A, B any](ma ReaderIOResult[A], f Kleisli[A, B]) ReaderIOResult[A] {
return RIOR.MonadChainFirst(ma, f)
return RIOR.MonadChainFirst(ma, WithContextK(f))
}
//go:inline
func MonadTap[A, B any](ma ReaderIOResult[A], f Kleisli[A, B]) ReaderIOResult[A] {
return RIOR.MonadTap(ma, WithContextK(f))
}
// ChainFirst sequences two [ReaderIOResult] computations but returns the result of the first.
@@ -190,7 +197,12 @@ func MonadChainFirst[A, B any](ma ReaderIOResult[A], f Kleisli[A, B]) ReaderIORe
//
//go:inline
func ChainFirst[A, B any](f Kleisli[A, B]) Operator[A, A] {
return RIOR.ChainFirst(f)
return RIOR.ChainFirst(WithContextK(f))
}
//go:inline
func Tap[A, B any](f Kleisli[A, B]) Operator[A, A] {
return RIOR.Tap(WithContextK(f))
}
// Of creates a [ReaderIOResult] that always succeeds with the given value.
@@ -233,14 +245,14 @@ func MonadApPar[B, A any](fab ReaderIOResult[func(A) B], fa ReaderIOResult[A]) R
return func(ctx context.Context) IOResult[B] {
// quick check for cancellation
if err := context.Cause(ctx); err != nil {
return ioeither.Left[B](err)
if ctx.Err() != nil {
return ioeither.Left[B](context.Cause(ctx))
}
return func() Result[B] {
// quick check for cancellation
if err := context.Cause(ctx); err != nil {
return either.Left[B](err)
if ctx.Err() != nil {
return either.Left[B](context.Cause(ctx))
}
// create sub-contexts for fa and fab, so they can cancel each other
@@ -372,7 +384,7 @@ func Ask() ReaderIOResult[context.Context] {
// Returns a new ReaderIOResult with the chained computation.
//
//go:inline
func MonadChainEitherK[A, B any](ma ReaderIOResult[A], f func(A) Either[B]) ReaderIOResult[B] {
func MonadChainEitherK[A, B any](ma ReaderIOResult[A], f either.Kleisli[error, A, B]) ReaderIOResult[B] {
return RIOR.MonadChainEitherK(ma, f)
}
@@ -385,7 +397,12 @@ func MonadChainEitherK[A, B any](ma ReaderIOResult[A], f func(A) Either[B]) Read
// Returns a function that chains the Either-returning function.
//
//go:inline
func ChainEitherK[A, B any](f func(A) Either[B]) Operator[A, B] {
func ChainEitherK[A, B any](f either.Kleisli[error, A, B]) Operator[A, B] {
return RIOR.ChainEitherK[context.Context](f)
}
//go:inline
func ChainResultK[A, B any](f either.Kleisli[error, A, B]) Operator[A, B] {
return RIOR.ChainEitherK[context.Context](f)
}
@@ -399,10 +416,15 @@ func ChainEitherK[A, B any](f func(A) Either[B]) Operator[A, B] {
// Returns a ReaderIOResult with the original value if both computations succeed.
//
//go:inline
func MonadChainFirstEitherK[A, B any](ma ReaderIOResult[A], f func(A) Either[B]) ReaderIOResult[A] {
func MonadChainFirstEitherK[A, B any](ma ReaderIOResult[A], f either.Kleisli[error, A, B]) ReaderIOResult[A] {
return RIOR.MonadChainFirstEitherK(ma, f)
}
//go:inline
func MonadTapEitherK[A, B any](ma ReaderIOResult[A], f either.Kleisli[error, A, B]) ReaderIOResult[A] {
return RIOR.MonadTapEitherK(ma, f)
}
// ChainFirstEitherK chains a function that returns an [Either] but keeps the original value.
// This is the curried version of [MonadChainFirstEitherK].
//
@@ -412,10 +434,15 @@ func MonadChainFirstEitherK[A, B any](ma ReaderIOResult[A], f func(A) Either[B])
// Returns a function that chains the Either-returning function.
//
//go:inline
func ChainFirstEitherK[A, B any](f func(A) Either[B]) Operator[A, A] {
func ChainFirstEitherK[A, B any](f either.Kleisli[error, A, B]) Operator[A, A] {
return RIOR.ChainFirstEitherK[context.Context](f)
}
//go:inline
func TapEitherK[A, B any](f either.Kleisli[error, A, B]) Operator[A, A] {
return RIOR.TapEitherK[context.Context](f)
}
// ChainOptionK chains a function that returns an [Option] into a [ReaderIOResult] computation.
// If the Option is None, the provided error function is called.
//
@@ -425,7 +452,7 @@ func ChainFirstEitherK[A, B any](f func(A) Either[B]) Operator[A, A] {
// Returns a function that chains Option-returning functions into ReaderIOResult.
//
//go:inline
func ChainOptionK[A, B any](onNone func() error) func(func(A) Option[B]) Operator[A, B] {
func ChainOptionK[A, B any](onNone func() error) func(option.Kleisli[A, B]) Operator[A, B] {
return RIOR.ChainOptionK[context.Context, A, B](onNone)
}
@@ -507,7 +534,7 @@ func Never[A any]() ReaderIOResult[A] {
// Returns a new ReaderIOResult with the chained IO computation.
//
//go:inline
func MonadChainIOK[A, B any](ma ReaderIOResult[A], f func(A) IO[B]) ReaderIOResult[B] {
func MonadChainIOK[A, B any](ma ReaderIOResult[A], f io.Kleisli[A, B]) ReaderIOResult[B] {
return RIOR.MonadChainIOK(ma, f)
}
@@ -520,7 +547,7 @@ func MonadChainIOK[A, B any](ma ReaderIOResult[A], f func(A) IO[B]) ReaderIOResu
// Returns a function that chains the IO-returning function.
//
//go:inline
func ChainIOK[A, B any](f func(A) IO[B]) Operator[A, B] {
func ChainIOK[A, B any](f io.Kleisli[A, B]) Operator[A, B] {
return RIOR.ChainIOK[context.Context](f)
}
@@ -534,10 +561,15 @@ func ChainIOK[A, B any](f func(A) IO[B]) Operator[A, B] {
// Returns a ReaderIOResult with the original value after executing the IO.
//
//go:inline
func MonadChainFirstIOK[A, B any](ma ReaderIOResult[A], f func(A) IO[B]) ReaderIOResult[A] {
func MonadChainFirstIOK[A, B any](ma ReaderIOResult[A], f io.Kleisli[A, B]) ReaderIOResult[A] {
return RIOR.MonadChainFirstIOK(ma, f)
}
//go:inline
func MonadTapIOK[A, B any](ma ReaderIOResult[A], f io.Kleisli[A, B]) ReaderIOResult[A] {
return RIOR.MonadTapIOK(ma, f)
}
// ChainFirstIOK chains a function that returns an [IO] but keeps the original value.
// This is the curried version of [MonadChainFirstIOK].
//
@@ -547,10 +579,15 @@ func MonadChainFirstIOK[A, B any](ma ReaderIOResult[A], f func(A) IO[B]) ReaderI
// Returns a function that chains the IO-returning function.
//
//go:inline
func ChainFirstIOK[A, B any](f func(A) IO[B]) Operator[A, A] {
func ChainFirstIOK[A, B any](f io.Kleisli[A, B]) Operator[A, A] {
return RIOR.ChainFirstIOK[context.Context](f)
}
//go:inline
func TapIOK[A, B any](f io.Kleisli[A, B]) Operator[A, A] {
return RIOR.TapIOK[context.Context](f)
}
// ChainIOEitherK chains a function that returns an [IOResult] into a [ReaderIOResult] computation.
// This is useful for integrating IOResult-returning functions into ReaderIOResult workflows.
//
@@ -560,7 +597,7 @@ func ChainFirstIOK[A, B any](f func(A) IO[B]) Operator[A, A] {
// Returns a function that chains the IOResult-returning function.
//
//go:inline
func ChainIOEitherK[A, B any](f func(A) IOResult[B]) Operator[A, B] {
func ChainIOEitherK[A, B any](f ioresult.Kleisli[A, B]) Operator[A, B] {
return RIOR.ChainIOEitherK[context.Context](f)
}
@@ -628,7 +665,7 @@ func Defer[A any](gen Lazy[ReaderIOResult[A]]) ReaderIOResult[A] {
//
//go:inline
func TryCatch[A any](f func(context.Context) func() (A, error)) ReaderIOResult[A] {
return RIOR.TryCatch(f, errors.IdentityError)
return RIOR.TryCatch(f, errors.Identity)
}
// MonadAlt provides an alternative [ReaderIOResult] if the first one fails.
@@ -723,7 +760,7 @@ func Flap[B, A any](a A) Operator[func(A) B, B] {
//
//go:inline
func Fold[A, B any](onLeft Kleisli[error, B], onRight Kleisli[A, B]) Operator[A, B] {
return RIOR.Fold(onLeft, onRight)
return RIOR.Fold(function.Flow2(onLeft, WithContext), function.Flow2(onRight, WithContext))
}
// GetOrElse extracts the value from a [ReaderIOResult], providing a default via a function if it fails.
@@ -735,7 +772,7 @@ func Fold[A, B any](onLeft Kleisli[error, B], onRight Kleisli[A, B]) Operator[A,
// Returns a function that converts a ReaderIOResult to a ReaderIO.
//
//go:inline
func GetOrElse[A any](onLeft func(error) ReaderIO[A]) func(ReaderIOResult[A]) ReaderIO[A] {
func GetOrElse[A any](onLeft readerio.Kleisli[error, A]) func(ReaderIOResult[A]) ReaderIO[A] {
return RIOR.GetOrElse(onLeft)
}
@@ -782,11 +819,21 @@ func MonadChainFirstReaderK[A, B any](ma ReaderIOResult[A], f reader.Kleisli[con
return RIOR.MonadChainFirstReaderK(ma, f)
}
//go:inline
func MonadTapReaderK[A, B any](ma ReaderIOResult[A], f reader.Kleisli[context.Context, A, B]) ReaderIOResult[A] {
return RIOR.MonadTapReaderK(ma, f)
}
//go:inline
func ChainFirstReaderK[A, B any](f reader.Kleisli[context.Context, A, B]) Operator[A, A] {
return RIOR.ChainFirstReaderK(f)
}
//go:inline
func TapReaderK[A, B any](f reader.Kleisli[context.Context, A, B]) Operator[A, A] {
return RIOR.TapReaderK(f)
}
//go:inline
func MonadChainReaderResultK[A, B any](ma ReaderIOResult[A], f readerresult.Kleisli[A, B]) ReaderIOResult[B] {
return RIOR.MonadChainReaderResultK(ma, f)
@@ -802,31 +849,51 @@ func MonadChainFirstReaderResultK[A, B any](ma ReaderIOResult[A], f readerresult
return RIOR.MonadChainFirstReaderResultK(ma, f)
}
//go:inline
func MonadTapReaderResultK[A, B any](ma ReaderIOResult[A], f readerresult.Kleisli[A, B]) ReaderIOResult[A] {
return RIOR.MonadTapReaderResultK(ma, f)
}
//go:inline
func ChainFirstReaderResultK[A, B any](f readerresult.Kleisli[A, B]) Operator[A, A] {
return RIOR.ChainFirstReaderResultK(f)
}
//go:inline
func MonadChainReaderIOK[A, B any](ma ReaderIOResult[A], f readerio.Kleisli[context.Context, A, B]) ReaderIOResult[B] {
func TapReaderResultK[A, B any](f readerresult.Kleisli[A, B]) Operator[A, A] {
return RIOR.TapReaderResultK(f)
}
//go:inline
func MonadChainReaderIOK[A, B any](ma ReaderIOResult[A], f readerio.Kleisli[A, B]) ReaderIOResult[B] {
return RIOR.MonadChainReaderIOK(ma, f)
}
//go:inline
func ChainReaderIOK[A, B any](f readerio.Kleisli[context.Context, A, B]) Operator[A, B] {
func ChainReaderIOK[A, B any](f readerio.Kleisli[A, B]) Operator[A, B] {
return RIOR.ChainReaderIOK(f)
}
//go:inline
func MonadChainFirstReaderIOK[A, B any](ma ReaderIOResult[A], f readerio.Kleisli[context.Context, A, B]) ReaderIOResult[A] {
func MonadChainFirstReaderIOK[A, B any](ma ReaderIOResult[A], f readerio.Kleisli[A, B]) ReaderIOResult[A] {
return RIOR.MonadChainFirstReaderIOK(ma, f)
}
//go:inline
func ChainFirstReaderIOK[A, B any](f readerio.Kleisli[context.Context, A, B]) Operator[A, A] {
func MonadTapReaderIOK[A, B any](ma ReaderIOResult[A], f readerio.Kleisli[A, B]) ReaderIOResult[A] {
return RIOR.MonadTapReaderIOK(ma, f)
}
//go:inline
func ChainFirstReaderIOK[A, B any](f readerio.Kleisli[A, B]) Operator[A, A] {
return RIOR.ChainFirstReaderIOK(f)
}
//go:inline
func TapReaderIOK[A, B any](f readerio.Kleisli[A, B]) Operator[A, A] {
return RIOR.TapReaderIOK(f)
}
//go:inline
func ChainReaderOptionK[A, B any](onNone func() error) func(readeroption.Kleisli[context.Context, A, B]) Operator[A, B] {
return RIOR.ChainReaderOptionK[context.Context, A, B](onNone)
@@ -836,3 +903,277 @@ func ChainReaderOptionK[A, B any](onNone func() error) func(readeroption.Kleisli
func ChainFirstReaderOptionK[A, B any](onNone func() error) func(readeroption.Kleisli[context.Context, A, B]) Operator[A, A] {
return RIOR.ChainFirstReaderOptionK[context.Context, A, B](onNone)
}
//go:inline
func TapReaderOptionK[A, B any](onNone func() error) func(readeroption.Kleisli[context.Context, A, B]) Operator[A, A] {
return RIOR.TapReaderOptionK[context.Context, A, B](onNone)
}
//go:inline
func Read[A any](r context.Context) func(ReaderIOResult[A]) IOResult[A] {
return RIOR.Read[A](r)
}
// MonadChainLeft chains a computation on the left (error) side of a [ReaderIOResult].
// If the input is a Left value, it applies the function f to the error, which can either recover with a
// new success value or produce a different error. If the input is a Right value, it passes through unchanged.
//
//go:inline
func MonadChainLeft[A any](fa ReaderIOResult[A], f Kleisli[error, A]) ReaderIOResult[A] {
return RIOR.MonadChainLeft(fa, WithContextK(f))
}
// ChainLeft is the curried version of [MonadChainLeft].
// It returns a function that chains a computation on the left (error) side of a [ReaderIOResult].
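//
// Example (a hedged sketch; fetchPrimary, fetchFallback, and Config are hypothetical):
//
//	recovered := F.Pipe1(
//	    fetchPrimary,
//	    ChainLeft(func(err error) ReaderIOResult[Config] {
//	        // try a fallback source when the primary lookup fails
//	        return fetchFallback(err)
//	    }),
//	)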
//
//go:inline
func ChainLeft[A any](f Kleisli[error, A]) Operator[A, A] {
return RIOR.ChainLeft(WithContextK(f))
}
// MonadChainFirstLeft chains a computation on the left (error) side but always returns the original error.
// If the input is a Left value, it applies the function f to the error and executes the resulting computation,
// but always returns the original Left error regardless of what f returns (Left or Right).
// If the input is a Right value, it passes through unchanged without calling f.
//
// This is useful for side effects on errors (like logging or metrics) where you want to perform an action
// when an error occurs but always propagate the original error, ensuring the error path is preserved.
//
//go:inline
func MonadChainFirstLeft[A, B any](ma ReaderIOResult[A], f Kleisli[error, B]) ReaderIOResult[A] {
return RIOR.MonadChainFirstLeft(ma, WithContextK(f))
}
//go:inline
func MonadTapLeft[A, B any](ma ReaderIOResult[A], f Kleisli[error, B]) ReaderIOResult[A] {
return RIOR.MonadTapLeft(ma, WithContextK(f))
}
// ChainFirstLeft is the curried version of [MonadChainFirstLeft].
// It returns a function that chains a computation on the left (error) side while always preserving the original error.
//
// This is particularly useful for adding error handling side effects (like logging, metrics, or notifications)
// in a functional pipeline. The original error is always returned regardless of what f returns (Left or Right),
// ensuring the error path is preserved.
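//
// Example (a minimal sketch; fetchData and Data are hypothetical, the logging side effect is lifted with FromIO):
//
//	logError := func(err error) ReaderIOResult[any] {
//	    return FromIO(func() any {
//	        slog.Error("operation failed", "error", err)
//	        return nil
//	    })
//	}
//
//	pipeline := F.Pipe1(
//	    fetchData,
//	    ChainFirstLeft[Data](logError),
//	)
//	// any error from fetchData is logged and then propagated unchanged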
//
//go:inline
func ChainFirstLeft[A, B any](f Kleisli[error, B]) Operator[A, A] {
return RIOR.ChainFirstLeft[A](WithContextK(f))
}
//go:inline
func TapLeft[A, B any](f Kleisli[error, B]) Operator[A, A] {
return RIOR.TapLeft[A](WithContextK(f))
}
//go:inline
func ChainFirstLeftIOK[A, B any](f io.Kleisli[error, B]) Operator[A, A] {
return RIOR.ChainFirstLeftIOK[A, context.Context](f)
}
//go:inline
func TapLeftIOK[A, B any](f io.Kleisli[error, B]) Operator[A, A] {
return RIOR.TapLeftIOK[A, context.Context](f)
}
// Local transforms the context.Context environment before passing it to a ReaderIOResult computation.
//
// This is the Reader's local operation, which allows you to modify the environment
// for a specific computation without affecting the outer context. The transformation
// function receives the current context and returns a new context along with a
// cancel function. The cancel function is automatically called when the computation
// completes (via defer), ensuring proper cleanup of resources.
//
// The function checks for context cancellation before applying the transformation,
// returning an error immediately if the context is already cancelled.
//
// This is useful for:
// - Adding timeouts or deadlines to specific operations
// - Adding context values for nested computations
// - Creating isolated context scopes
// - Implementing context-based dependency injection
//
// Type Parameters:
// - A: The value type of the ReaderIOResult
//
// Parameters:
// - f: A function that transforms the context and returns a cancel function
//
// Returns:
// - An Operator that runs the computation with the transformed context
//
// Example:
//
// import F "github.com/IBM/fp-go/v2/function"
//
// // Add a custom value to the context
// type key int
// const userKey key = 0
//
// addUser := readerioresult.Local[string](func(ctx context.Context) (context.Context, context.CancelFunc) {
// newCtx := context.WithValue(ctx, userKey, "Alice")
// return newCtx, func() {} // No-op cancel
// })
//
// getUser := readerioresult.FromReader(func(ctx context.Context) string {
// if user := ctx.Value(userKey); user != nil {
// return user.(string)
// }
// return "unknown"
// })
//
// result := F.Pipe1(
// getUser,
// addUser,
// )
// res := result(context.Background())() // Returns result.Of("Alice")
//
// Timeout Example:
//
// // Add a 5-second timeout to a specific operation
// withTimeout := readerioresult.Local[Data](func(ctx context.Context) (context.Context, context.CancelFunc) {
// return context.WithTimeout(ctx, 5*time.Second)
// })
//
// result := F.Pipe1(
// fetchData,
// withTimeout,
// )
func Local[A any](f func(context.Context) (context.Context, context.CancelFunc)) Operator[A, A] {
return func(rr ReaderIOResult[A]) ReaderIOResult[A] {
return func(ctx context.Context) IOResult[A] {
return func() Result[A] {
if ctx.Err() != nil {
return result.Left[A](context.Cause(ctx))
}
otherCtx, otherCancel := f(ctx)
defer otherCancel()
return rr(otherCtx)()
}
}
}
}
// WithTimeout adds a timeout to the context for a ReaderIOResult computation.
//
// This is a convenience wrapper around Local that uses context.WithTimeout.
// The computation must complete within the specified duration, or it will be
// cancelled. This is useful for ensuring operations don't run indefinitely
// and for implementing timeout-based error handling.
//
// The timeout is relative to when the ReaderIOResult is executed, not when
// WithTimeout is called. The cancel function is automatically called when
// the computation completes, ensuring proper cleanup. If the timeout expires,
// the computation will receive a context.DeadlineExceeded error.
//
// Type Parameters:
// - A: The value type of the ReaderIOResult
//
// Parameters:
// - timeout: The maximum duration for the computation
//
// Returns:
// - An Operator that runs the computation with a timeout
//
// Example:
//
// import (
// "time"
// F "github.com/IBM/fp-go/v2/function"
// )
//
// // Fetch data with a 5-second timeout
// fetchData := readerioresult.FromReader(func(ctx context.Context) Data {
// // Simulate slow operation
// select {
// case <-time.After(10 * time.Second):
// return Data{Value: "slow"}
// case <-ctx.Done():
// return Data{}
// }
// })
//
// result := F.Pipe1(
// fetchData,
// readerioresult.WithTimeout[Data](5*time.Second),
// )
// res := result(context.Background())() // Left with context.DeadlineExceeded after 5s
//
// Successful Example:
//
// quickFetch := readerioresult.Right(Data{Value: "quick"})
// result := F.Pipe1(
// quickFetch,
// readerioresult.WithTimeout[Data](5*time.Second),
// )
// res := result(context.Background())() // Returns result.Of(Data{Value: "quick"})
func WithTimeout[A any](timeout time.Duration) Operator[A, A] {
return Local[A](func(ctx context.Context) (context.Context, context.CancelFunc) {
return context.WithTimeout(ctx, timeout)
})
}
// WithDeadline adds an absolute deadline to the context for a ReaderIOResult computation.
//
// This is a convenience wrapper around Local that uses context.WithDeadline.
// The computation must complete before the specified time, or it will be
// cancelled. This is useful for coordinating operations that must finish
// by a specific time, such as request deadlines or scheduled tasks.
//
// The deadline is an absolute time, unlike WithTimeout which uses a relative
// duration. The cancel function is automatically called when the computation
// completes, ensuring proper cleanup. If the deadline passes, the computation
// will receive a context.DeadlineExceeded error.
//
// Type Parameters:
// - A: The value type of the ReaderIOResult
//
// Parameters:
// - deadline: The absolute time by which the computation must complete
//
// Returns:
// - An Operator that runs the computation with a deadline
//
// Example:
//
// import (
// "time"
// F "github.com/IBM/fp-go/v2/function"
// )
//
// // Operation must complete by 3 PM
// deadline := time.Date(2024, 1, 1, 15, 0, 0, 0, time.UTC)
//
// fetchData := readerioresult.FromReader(func(ctx context.Context) Data {
// // Simulate operation
// select {
// case <-time.After(1 * time.Hour):
// return Data{Value: "done"}
// case <-ctx.Done():
// return Data{}
// }
// })
//
// result := F.Pipe1(
// fetchData,
// readerioresult.WithDeadline[Data](deadline),
// )
// res := result(context.Background())() // Left with context.DeadlineExceeded if past deadline
//
// Combining with Parent Context:
//
// // If parent context already has a deadline, the earlier one takes precedence
// parentCtx, cancel := context.WithDeadline(context.Background(), time.Now().Add(1*time.Hour))
// defer cancel()
//
// laterDeadline := time.Now().Add(2 * time.Hour)
// result := F.Pipe1(
// fetchData,
// readerioresult.WithDeadline[Data](laterDeadline),
// )
// res := result(parentCtx)() // Will use parent's 1-hour deadline
func WithDeadline[A any](deadline time.Time) Operator[A, A] {
return Local[A](func(ctx context.Context) (context.Context, context.CancelFunc) {
return context.WithDeadline(ctx, deadline)
})
}

View File

@@ -24,6 +24,7 @@ import (
E "github.com/IBM/fp-go/v2/either"
F "github.com/IBM/fp-go/v2/function"
IOE "github.com/IBM/fp-go/v2/ioeither"
N "github.com/IBM/fp-go/v2/number"
)
var (
@@ -37,21 +38,21 @@ var (
// Benchmark core constructors
func BenchmarkLeft(b *testing.B) {
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = Left[int](benchErr)
}
}
func BenchmarkRight(b *testing.B) {
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = Right(42)
}
}
func BenchmarkOf(b *testing.B) {
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = Of(42)
}
}
@@ -60,7 +61,7 @@ func BenchmarkFromEither_Right(b *testing.B) {
either := E.Right[error](42)
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = FromEither(either)
}
}
@@ -69,7 +70,7 @@ func BenchmarkFromEither_Left(b *testing.B) {
either := E.Left[int](benchErr)
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = FromEither(either)
}
}
@@ -77,7 +78,7 @@ func BenchmarkFromEither_Left(b *testing.B) {
func BenchmarkFromIO(b *testing.B) {
io := func() int { return 42 }
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = FromIO(io)
}
}
@@ -85,7 +86,7 @@ func BenchmarkFromIO(b *testing.B) {
func BenchmarkFromIOEither_Right(b *testing.B) {
ioe := IOE.Of[error](42)
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = FromIOEither(ioe)
}
}
@@ -93,7 +94,7 @@ func BenchmarkFromIOEither_Right(b *testing.B) {
func BenchmarkFromIOEither_Left(b *testing.B) {
ioe := IOE.Left[int](benchErr)
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = FromIOEither(ioe)
}
}
@@ -103,7 +104,7 @@ func BenchmarkExecute_Right(b *testing.B) {
rioe := Right(42)
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchResult = rioe(benchCtx)()
}
}
@@ -112,7 +113,7 @@ func BenchmarkExecute_Left(b *testing.B) {
rioe := Left[int](benchErr)
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchResult = rioe(benchCtx)()
}
}
@@ -123,7 +124,7 @@ func BenchmarkExecute_WithContext(b *testing.B) {
defer cancel()
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchResult = rioe(ctx)()
}
}
@@ -131,40 +132,40 @@ func BenchmarkExecute_WithContext(b *testing.B) {
// Benchmark functor operations
func BenchmarkMonadMap_Right(b *testing.B) {
rioe := Right(42)
mapper := func(a int) int { return a * 2 }
mapper := N.Mul(2)
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = MonadMap(rioe, mapper)
}
}
func BenchmarkMonadMap_Left(b *testing.B) {
rioe := Left[int](benchErr)
mapper := func(a int) int { return a * 2 }
mapper := N.Mul(2)
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = MonadMap(rioe, mapper)
}
}
func BenchmarkMap_Right(b *testing.B) {
rioe := Right(42)
mapper := Map(func(a int) int { return a * 2 })
mapper := Map(N.Mul(2))
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = mapper(rioe)
}
}
func BenchmarkMap_Left(b *testing.B) {
rioe := Left[int](benchErr)
mapper := Map(func(a int) int { return a * 2 })
mapper := Map(N.Mul(2))
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = mapper(rioe)
}
}
@@ -174,7 +175,7 @@ func BenchmarkMapTo_Right(b *testing.B) {
mapper := MapTo[int](99)
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = mapper(rioe)
}
}
@@ -185,7 +186,7 @@ func BenchmarkMonadChain_Right(b *testing.B) {
chainer := func(a int) ReaderIOResult[int] { return Right(a * 2) }
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = MonadChain(rioe, chainer)
}
}
@@ -195,7 +196,7 @@ func BenchmarkMonadChain_Left(b *testing.B) {
chainer := func(a int) ReaderIOResult[int] { return Right(a * 2) }
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = MonadChain(rioe, chainer)
}
}
@@ -205,7 +206,7 @@ func BenchmarkChain_Right(b *testing.B) {
chainer := Chain(func(a int) ReaderIOResult[int] { return Right(a * 2) })
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = chainer(rioe)
}
}
@@ -215,7 +216,7 @@ func BenchmarkChain_Left(b *testing.B) {
chainer := Chain(func(a int) ReaderIOResult[int] { return Right(a * 2) })
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = chainer(rioe)
}
}
@@ -225,7 +226,7 @@ func BenchmarkChainFirst_Right(b *testing.B) {
chainer := ChainFirst(func(a int) ReaderIOResult[string] { return Right("logged") })
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = chainer(rioe)
}
}
@@ -235,7 +236,7 @@ func BenchmarkChainFirst_Left(b *testing.B) {
chainer := ChainFirst(func(a int) ReaderIOResult[string] { return Right("logged") })
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = chainer(rioe)
}
}
@@ -244,7 +245,7 @@ func BenchmarkFlatten_Right(b *testing.B) {
nested := Right(Right(42))
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = Flatten(nested)
}
}
@@ -253,28 +254,28 @@ func BenchmarkFlatten_Left(b *testing.B) {
nested := Left[ReaderIOResult[int]](benchErr)
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = Flatten(nested)
}
}
// Benchmark applicative operations
func BenchmarkMonadApSeq_RightRight(b *testing.B) {
fab := Right(func(a int) int { return a * 2 })
fab := Right(N.Mul(2))
fa := Right(42)
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = MonadApSeq(fab, fa)
}
}
func BenchmarkMonadApSeq_RightLeft(b *testing.B) {
fab := Right(func(a int) int { return a * 2 })
fab := Right(N.Mul(2))
fa := Left[int](benchErr)
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = MonadApSeq(fab, fa)
}
}
@@ -284,27 +285,27 @@ func BenchmarkMonadApSeq_LeftRight(b *testing.B) {
fa := Right(42)
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = MonadApSeq(fab, fa)
}
}
func BenchmarkMonadApPar_RightRight(b *testing.B) {
fab := Right(func(a int) int { return a * 2 })
fab := Right(N.Mul(2))
fa := Right(42)
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = MonadApPar(fab, fa)
}
}
func BenchmarkMonadApPar_RightLeft(b *testing.B) {
fab := Right(func(a int) int { return a * 2 })
fab := Right(N.Mul(2))
fa := Left[int](benchErr)
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = MonadApPar(fab, fa)
}
}
@@ -314,30 +315,30 @@ func BenchmarkMonadApPar_LeftRight(b *testing.B) {
fa := Right(42)
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = MonadApPar(fab, fa)
}
}
// Benchmark execution of applicative operations
func BenchmarkExecuteApSeq_RightRight(b *testing.B) {
fab := Right(func(a int) int { return a * 2 })
fab := Right(N.Mul(2))
fa := Right(42)
rioe := MonadApSeq(fab, fa)
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchResult = rioe(benchCtx)()
}
}
func BenchmarkExecuteApPar_RightRight(b *testing.B) {
fab := Right(func(a int) int { return a * 2 })
fab := Right(N.Mul(2))
fa := Right(42)
rioe := MonadApPar(fab, fa)
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchResult = rioe(benchCtx)()
}
}
@@ -348,7 +349,7 @@ func BenchmarkAlt_RightRight(b *testing.B) {
alternative := Alt(func() ReaderIOResult[int] { return Right(99) })
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = alternative(rioe)
}
}
@@ -358,7 +359,7 @@ func BenchmarkAlt_LeftRight(b *testing.B) {
alternative := Alt(func() ReaderIOResult[int] { return Right(99) })
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = alternative(rioe)
}
}
@@ -368,7 +369,7 @@ func BenchmarkOrElse_Right(b *testing.B) {
recover := OrElse(func(e error) ReaderIOResult[int] { return Right(0) })
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = recover(rioe)
}
}
@@ -378,7 +379,7 @@ func BenchmarkOrElse_Left(b *testing.B) {
recover := OrElse(func(e error) ReaderIOResult[int] { return Right(0) })
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = recover(rioe)
}
}
@@ -389,7 +390,7 @@ func BenchmarkChainEitherK_Right(b *testing.B) {
chainer := ChainEitherK(func(a int) Either[int] { return E.Right[error](a * 2) })
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = chainer(rioe)
}
}
@@ -399,7 +400,7 @@ func BenchmarkChainEitherK_Left(b *testing.B) {
chainer := ChainEitherK(func(a int) Either[int] { return E.Right[error](a * 2) })
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = chainer(rioe)
}
}
@@ -409,7 +410,7 @@ func BenchmarkChainIOK_Right(b *testing.B) {
chainer := ChainIOK(func(a int) func() int { return func() int { return a * 2 } })
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = chainer(rioe)
}
}
@@ -419,7 +420,7 @@ func BenchmarkChainIOK_Left(b *testing.B) {
chainer := ChainIOK(func(a int) func() int { return func() int { return a * 2 } })
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = chainer(rioe)
}
}
@@ -429,7 +430,7 @@ func BenchmarkChainIOEitherK_Right(b *testing.B) {
chainer := ChainIOEitherK(func(a int) IOEither[int] { return IOE.Of[error](a * 2) })
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = chainer(rioe)
}
}
@@ -439,7 +440,7 @@ func BenchmarkChainIOEitherK_Left(b *testing.B) {
chainer := ChainIOEitherK(func(a int) IOEither[int] { return IOE.Of[error](a * 2) })
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = chainer(rioe)
}
}
@@ -447,7 +448,7 @@ func BenchmarkChainIOEitherK_Left(b *testing.B) {
// Benchmark context operations
func BenchmarkAsk(b *testing.B) {
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
_ = Ask()
}
}
@@ -455,7 +456,7 @@ func BenchmarkAsk(b *testing.B) {
func BenchmarkDefer(b *testing.B) {
gen := func() ReaderIOResult[int] { return Right(42) }
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = Defer(gen)
}
}
@@ -463,7 +464,7 @@ func BenchmarkDefer(b *testing.B) {
func BenchmarkMemoize(b *testing.B) {
rioe := Right(42)
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = Memoize(rioe)
}
}
@@ -472,14 +473,14 @@ func BenchmarkMemoize(b *testing.B) {
func BenchmarkDelay_Construction(b *testing.B) {
rioe := Right(42)
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = Delay[int](time.Millisecond)(rioe)
}
}
func BenchmarkTimer_Construction(b *testing.B) {
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
_ = Timer(time.Millisecond)
}
}
@@ -490,7 +491,7 @@ func BenchmarkTryCatch_Success(b *testing.B) {
return func() (int, error) { return 42, nil }
}
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = TryCatch(f)
}
}
@@ -500,7 +501,7 @@ func BenchmarkTryCatch_Error(b *testing.B) {
return func() (int, error) { return 0, benchErr }
}
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = TryCatch(f)
}
}
@@ -512,7 +513,7 @@ func BenchmarkExecuteTryCatch_Success(b *testing.B) {
rioe := TryCatch(f)
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchResult = rioe(benchCtx)()
}
}
@@ -524,7 +525,7 @@ func BenchmarkExecuteTryCatch_Error(b *testing.B) {
rioe := TryCatch(f)
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchResult = rioe(benchCtx)()
}
}
@@ -534,10 +535,10 @@ func BenchmarkPipeline_Map_Right(b *testing.B) {
rioe := Right(21)
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = F.Pipe1(
rioe,
Map(func(x int) int { return x * 2 }),
Map(N.Mul(2)),
)
}
}
@@ -546,10 +547,10 @@ func BenchmarkPipeline_Map_Left(b *testing.B) {
rioe := Left[int](benchErr)
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = F.Pipe1(
rioe,
Map(func(x int) int { return x * 2 }),
Map(N.Mul(2)),
)
}
}
@@ -558,7 +559,7 @@ func BenchmarkPipeline_Chain_Right(b *testing.B) {
rioe := Right(21)
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = F.Pipe1(
rioe,
Chain(func(x int) ReaderIOResult[int] { return Right(x * 2) }),
@@ -570,7 +571,7 @@ func BenchmarkPipeline_Chain_Left(b *testing.B) {
rioe := Left[int](benchErr)
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = F.Pipe1(
rioe,
Chain(func(x int) ReaderIOResult[int] { return Right(x * 2) }),
@@ -582,12 +583,12 @@ func BenchmarkPipeline_Complex_Right(b *testing.B) {
rioe := Right(10)
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = F.Pipe3(
rioe,
Map(func(x int) int { return x * 2 }),
Map(N.Mul(2)),
Chain(func(x int) ReaderIOResult[int] { return Right(x + 1) }),
Map(func(x int) int { return x * 2 }),
Map(N.Mul(2)),
)
}
}
@@ -596,12 +597,12 @@ func BenchmarkPipeline_Complex_Left(b *testing.B) {
rioe := Left[int](benchErr)
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchRIOE = F.Pipe3(
rioe,
Map(func(x int) int { return x * 2 }),
Map(N.Mul(2)),
Chain(func(x int) ReaderIOResult[int] { return Right(x + 1) }),
Map(func(x int) int { return x * 2 }),
Map(N.Mul(2)),
)
}
}
@@ -609,13 +610,13 @@ func BenchmarkPipeline_Complex_Left(b *testing.B) {
func BenchmarkExecutePipeline_Complex_Right(b *testing.B) {
rioe := F.Pipe3(
Right(10),
Map(func(x int) int { return x * 2 }),
Map(N.Mul(2)),
Chain(func(x int) ReaderIOResult[int] { return Right(x + 1) }),
Map(func(x int) int { return x * 2 }),
Map(N.Mul(2)),
)
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchResult = rioe(benchCtx)()
}
}
@@ -624,7 +625,7 @@ func BenchmarkExecutePipeline_Complex_Right(b *testing.B) {
func BenchmarkDo(b *testing.B) {
type State struct{ value int }
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
_ = Do(State{})
}
}
@@ -642,7 +643,7 @@ func BenchmarkBind_Right(b *testing.B) {
)
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
_ = binder(initial)
}
}
@@ -658,7 +659,7 @@ func BenchmarkLet_Right(b *testing.B) {
)
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
_ = letter(initial)
}
}
@@ -674,7 +675,7 @@ func BenchmarkApS_Right(b *testing.B) {
)
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
_ = aps(initial)
}
}
@@ -687,7 +688,7 @@ func BenchmarkTraverseArray_Empty(b *testing.B) {
})
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
_ = traverser(arr)
}
}
@@ -699,7 +700,7 @@ func BenchmarkTraverseArray_Small(b *testing.B) {
})
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
_ = traverser(arr)
}
}
@@ -714,7 +715,7 @@ func BenchmarkTraverseArray_Medium(b *testing.B) {
})
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
_ = traverser(arr)
}
}
@@ -726,7 +727,7 @@ func BenchmarkTraverseArraySeq_Small(b *testing.B) {
})
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
_ = traverser(arr)
}
}
@@ -738,7 +739,7 @@ func BenchmarkTraverseArrayPar_Small(b *testing.B) {
})
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
_ = traverser(arr)
}
}
@@ -751,7 +752,7 @@ func BenchmarkSequenceArray_Small(b *testing.B) {
}
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
_ = SequenceArray(arr)
}
}
@@ -763,7 +764,7 @@ func BenchmarkExecuteTraverseArray_Small(b *testing.B) {
})(arr)
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
_ = rioe(benchCtx)()
}
}
@@ -775,7 +776,7 @@ func BenchmarkExecuteTraverseArraySeq_Small(b *testing.B) {
})(arr)
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
_ = rioe(benchCtx)()
}
}
@@ -787,7 +788,7 @@ func BenchmarkExecuteTraverseArrayPar_Small(b *testing.B) {
})(arr)
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
_ = rioe(benchCtx)()
}
}
@@ -800,7 +801,7 @@ func BenchmarkTraverseRecord_Small(b *testing.B) {
})
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
_ = traverser(rec)
}
}
@@ -813,7 +814,7 @@ func BenchmarkSequenceRecord_Small(b *testing.B) {
}
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
_ = SequenceRecord(rec)
}
}
@@ -826,7 +827,7 @@ func BenchmarkWithResource_Success(b *testing.B) {
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
_ = WithResource[int](acquire, release)(body)
}
}
@@ -839,7 +840,7 @@ func BenchmarkExecuteWithResource_Success(b *testing.B) {
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchResult = rioe(benchCtx)()
}
}
@@ -852,7 +853,7 @@ func BenchmarkExecuteWithResource_ErrorInBody(b *testing.B) {
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchResult = rioe(benchCtx)()
}
}
@@ -865,13 +866,13 @@ func BenchmarkExecute_CanceledContext(b *testing.B) {
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchResult = rioe(ctx)()
}
}
func BenchmarkExecuteApPar_CanceledContext(b *testing.B) {
fab := Right(func(a int) int { return a * 2 })
fab := Right(N.Mul(2))
fa := Right(42)
rioe := MonadApPar(fab, fa)
ctx, cancel := context.WithCancel(benchCtx)
@@ -879,9 +880,7 @@ func BenchmarkExecuteApPar_CanceledContext(b *testing.B) {
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
for b.Loop() {
benchResult = rioe(ctx)()
}
}
// Made with Bob

View File

@@ -26,6 +26,7 @@ import (
IOG "github.com/IBM/fp-go/v2/io"
IOE "github.com/IBM/fp-go/v2/ioeither"
M "github.com/IBM/fp-go/v2/monoid"
N "github.com/IBM/fp-go/v2/number"
O "github.com/IBM/fp-go/v2/option"
R "github.com/IBM/fp-go/v2/reader"
"github.com/stretchr/testify/assert"
@@ -77,27 +78,27 @@ func TestOf(t *testing.T) {
func TestMonadMap(t *testing.T) {
t.Run("Map over Right", func(t *testing.T) {
result := MonadMap(Of(5), func(x int) int { return x * 2 })
result := MonadMap(Of(5), N.Mul(2))
assert.Equal(t, E.Right[error](10), result(context.Background())())
})
t.Run("Map over Left", func(t *testing.T) {
err := errors.New("test error")
result := MonadMap(Left[int](err), func(x int) int { return x * 2 })
result := MonadMap(Left[int](err), N.Mul(2))
assert.Equal(t, E.Left[int](err), result(context.Background())())
})
}
func TestMap(t *testing.T) {
t.Run("Map with success", func(t *testing.T) {
mapper := Map(func(x int) int { return x * 2 })
mapper := Map(N.Mul(2))
result := mapper(Of(5))
assert.Equal(t, E.Right[error](10), result(context.Background())())
})
t.Run("Map with error", func(t *testing.T) {
err := errors.New("test error")
mapper := Map(func(x int) int { return x * 2 })
mapper := Map(N.Mul(2))
result := mapper(Left[int](err))
assert.Equal(t, E.Left[int](err), result(context.Background())())
})
@@ -182,7 +183,7 @@ func TestChainFirst(t *testing.T) {
func TestMonadApSeq(t *testing.T) {
t.Run("ApSeq with success", func(t *testing.T) {
fab := Of(func(x int) int { return x * 2 })
fab := Of(N.Mul(2))
fa := Of(5)
result := MonadApSeq(fab, fa)
assert.Equal(t, E.Right[error](10), result(context.Background())())
@@ -198,7 +199,7 @@ func TestMonadApSeq(t *testing.T) {
t.Run("ApSeq with error in value", func(t *testing.T) {
err := errors.New("test error")
fab := Of(func(x int) int { return x * 2 })
fab := Of(N.Mul(2))
fa := Left[int](err)
result := MonadApSeq(fab, fa)
assert.Equal(t, E.Left[int](err), result(context.Background())())
@@ -207,7 +208,7 @@ func TestMonadApSeq(t *testing.T) {
func TestApSeq(t *testing.T) {
fa := Of(5)
fab := Of(func(x int) int { return x * 2 })
fab := Of(N.Mul(2))
result := MonadApSeq(fab, fa)
assert.Equal(t, E.Right[error](10), result(context.Background())())
}
@@ -215,7 +216,7 @@ func TestApSeq(t *testing.T) {
func TestApPar(t *testing.T) {
t.Run("ApPar with success", func(t *testing.T) {
fa := Of(5)
fab := Of(func(x int) int { return x * 2 })
fab := Of(N.Mul(2))
result := MonadApPar(fab, fa)
assert.Equal(t, E.Right[error](10), result(context.Background())())
})
@@ -224,7 +225,7 @@ func TestApPar(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
cancel()
fa := Of(5)
fab := Of(func(x int) int { return x * 2 })
fab := Of(N.Mul(2))
result := MonadApPar(fab, fa)
res := result(ctx)()
assert.True(t, E.IsLeft(res))
@@ -234,7 +235,7 @@ func TestApPar(t *testing.T) {
func TestFromPredicate(t *testing.T) {
t.Run("Predicate true", func(t *testing.T) {
pred := FromPredicate(
func(x int) bool { return x > 0 },
N.MoreThan(0),
func(x int) error { return fmt.Errorf("value %d is not positive", x) },
)
result := pred(5)
@@ -243,7 +244,7 @@ func TestFromPredicate(t *testing.T) {
t.Run("Predicate false", func(t *testing.T) {
pred := FromPredicate(
func(x int) bool { return x > 0 },
N.MoreThan(0),
func(x int) error { return fmt.Errorf("value %d is not positive", x) },
)
result := pred(-5)
@@ -566,15 +567,13 @@ func TestMemoize(t *testing.T) {
res1 := computation(context.Background())()
assert.True(t, E.IsRight(res1))
val1 := E.ToOption(res1)
v1, _ := O.Unwrap(val1)
assert.Equal(t, 1, v1)
assert.Equal(t, O.Of(1), val1)
// Second execution should return cached value
res2 := computation(context.Background())()
assert.True(t, E.IsRight(res2))
val2 := E.ToOption(res2)
v2, _ := O.Unwrap(val2)
assert.Equal(t, 1, v2)
assert.Equal(t, O.Of(1), val2)
// Counter should only be incremented once
assert.Equal(t, 1, counter)
@@ -587,14 +586,14 @@ func TestFlatten(t *testing.T) {
}
func TestMonadFlap(t *testing.T) {
fab := Of(func(x int) int { return x * 2 })
fab := Of(N.Mul(2))
result := MonadFlap(fab, 5)
assert.Equal(t, E.Right[error](10), result(context.Background())())
}
func TestFlap(t *testing.T) {
flapper := Flap[int](5)
result := flapper(Of(func(x int) int { return x * 2 }))
result := flapper(Of(N.Mul(2)))
assert.Equal(t, E.Right[error](10), result(context.Background())())
}
@@ -738,9 +737,7 @@ func TestTraverseArray(t *testing.T) {
res := result(context.Background())()
assert.True(t, E.IsRight(res))
arrOpt := E.ToOption(res)
assert.True(t, O.IsSome(arrOpt))
resultArr, _ := O.Unwrap(arrOpt)
assert.Equal(t, []int{2, 4, 6}, resultArr)
assert.Equal(t, O.Of([]int{2, 4, 6}), arrOpt)
})
t.Run("TraverseArray with error", func(t *testing.T) {
@@ -764,9 +761,7 @@ func TestSequenceArray(t *testing.T) {
res := result(context.Background())()
assert.True(t, E.IsRight(res))
arrOpt := E.ToOption(res)
assert.True(t, O.IsSome(arrOpt))
resultArr, _ := O.Unwrap(arrOpt)
assert.Equal(t, []int{1, 2, 3}, resultArr)
assert.Equal(t, O.Of([]int{1, 2, 3}), arrOpt)
}
func TestTraverseRecord(t *testing.T) {
@@ -875,5 +870,3 @@ func TestBracket(t *testing.T) {
assert.Equal(t, E.Left[int](err), res)
})
}
// Made with Bob
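
Several hunks in the file above replace inline lambdas such as func(x int) int { return x * 2 } with curried helpers from the number package. A minimal sketch of that equivalence, assuming only the N.Mul and N.MoreThan helpers that appear in this diff; it is an illustration, not code from the repository:

package main

import (
	"fmt"

	N "github.com/IBM/fp-go/v2/number"
)

func main() {
	// N.Mul(2) plays the role of func(x int) int { return x * 2 }.
	double := N.Mul(2)
	// N.MoreThan(0) plays the role of func(x int) bool { return x > 0 }.
	positive := N.MoreThan(0)

	fmt.Println(double(21))   // 42
	fmt.Println(positive(-5)) // false
}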


@@ -284,3 +284,160 @@ func TestWithResourceErrorInRelease(t *testing.T) {
assert.Equal(t, 0, countRelease)
assert.Equal(t, E.Left[int](err), res)
}
func TestMonadChainFirstLeft(t *testing.T) {
ctx := context.Background()
// Test with Left value - function returns Left, always preserves original error
t.Run("Left value with function returning Left preserves original error", func(t *testing.T) {
sideEffectCalled := false
originalErr := fmt.Errorf("original error")
result := MonadChainFirstLeft(
Left[int](originalErr),
func(e error) ReaderIOResult[int] {
sideEffectCalled = true
return Left[int](fmt.Errorf("new error")) // This error is ignored
},
)
actualResult := result(ctx)()
assert.True(t, sideEffectCalled)
assert.Equal(t, E.Left[int](originalErr), actualResult)
})
// Test with Left value - function returns Right, still returns original Left
t.Run("Left value with function returning Right still returns original Left", func(t *testing.T) {
var capturedError error
originalErr := fmt.Errorf("validation failed")
result := MonadChainFirstLeft(
Left[int](originalErr),
func(e error) ReaderIOResult[int] {
capturedError = e
return Right(999) // This Right value is ignored
},
)
actualResult := result(ctx)()
assert.Equal(t, originalErr, capturedError)
assert.Equal(t, E.Left[int](originalErr), actualResult)
})
// Test with Right value - should pass through without calling function
t.Run("Right value passes through", func(t *testing.T) {
sideEffectCalled := false
result := MonadChainFirstLeft(
Right(42),
func(e error) ReaderIOResult[int] {
sideEffectCalled = true
return Left[int](fmt.Errorf("should not be called"))
},
)
assert.False(t, sideEffectCalled)
assert.Equal(t, E.Right[error](42), result(ctx)())
})
// Test that side effects are executed but original error is always preserved
t.Run("Side effects executed but original error preserved", func(t *testing.T) {
effectCount := 0
originalErr := fmt.Errorf("original error")
result := MonadChainFirstLeft(
Left[int](originalErr),
func(e error) ReaderIOResult[int] {
effectCount++
// Try to return Right, but original Left should still be returned
return Right(999)
},
)
actualResult := result(ctx)()
assert.Equal(t, 1, effectCount)
assert.Equal(t, E.Left[int](originalErr), actualResult)
})
}
func TestChainFirstLeft(t *testing.T) {
ctx := context.Background()
// Test with Left value - function returns Left, always preserves original error
t.Run("Left value with function returning Left preserves error", func(t *testing.T) {
var captured error
originalErr := fmt.Errorf("test error")
chainFn := ChainFirstLeft[int](func(e error) ReaderIOResult[int] {
captured = e
return Left[int](fmt.Errorf("ignored error"))
})
result := F.Pipe1(
Left[int](originalErr),
chainFn,
)
actualResult := result(ctx)()
assert.Equal(t, originalErr, captured)
assert.Equal(t, E.Left[int](originalErr), actualResult)
})
// Test with Left value - function returns Right, still returns original Left
t.Run("Left value with function returning Right still returns original Left", func(t *testing.T) {
var captured error
originalErr := fmt.Errorf("test error")
chainFn := ChainFirstLeft[int](func(e error) ReaderIOResult[int] {
captured = e
return Right(42) // This Right is ignored
})
result := F.Pipe1(
Left[int](originalErr),
chainFn,
)
actualResult := result(ctx)()
assert.Equal(t, originalErr, captured)
assert.Equal(t, E.Left[int](originalErr), actualResult)
})
// Test with Right value - should pass through without calling function
t.Run("Right value passes through", func(t *testing.T) {
called := false
chainFn := ChainFirstLeft[int](func(e error) ReaderIOResult[int] {
called = true
return Right(0)
})
result := F.Pipe1(
Right(100),
chainFn,
)
assert.False(t, called)
assert.Equal(t, E.Right[error](100), result(ctx)())
})
// Test that original error is always preserved regardless of what f returns
t.Run("Original error always preserved", func(t *testing.T) {
originalErr := fmt.Errorf("original")
chainFn := ChainFirstLeft[int](func(e error) ReaderIOResult[int] {
// Try to return Right, but original Left should still be returned
return Right(999)
})
result := F.Pipe1(
Left[int](originalErr),
chainFn,
)
assert.Equal(t, E.Left[int](originalErr), result(ctx)())
})
// Test logging with Left preservation
t.Run("Logging with Left preservation", func(t *testing.T) {
errorLog := []string{}
originalErr := fmt.Errorf("step1")
logError := ChainFirstLeft[string](func(e error) ReaderIOResult[string] {
errorLog = append(errorLog, "Logged: "+e.Error())
return Left[string](fmt.Errorf("log entry")) // This is ignored
})
result := F.Pipe2(
Left[string](originalErr),
logError,
ChainLeft(func(e error) ReaderIOResult[string] {
return Left[string](fmt.Errorf("step2"))
}),
)
actualResult := result(ctx)()
assert.Equal(t, []string{"Logged: step1"}, errorLog)
assert.Equal(t, E.Left[string](fmt.Errorf("step2")), actualResult)
})
}
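
The tests above pin down the ChainFirstLeft contract: the supplied function runs only on the error channel and its result is discarded, so the original Left always survives. A minimal usage sketch along those lines; the import path of the context-aware package is assumed from the context/readerio import seen elsewhere in this diff, and the logging effect is purely illustrative:

package main

import (
	"context"
	"fmt"

	RIOR "github.com/IBM/fp-go/v2/context/readerioresult" // assumed path
	F "github.com/IBM/fp-go/v2/function"
)

func main() {
	// Hypothetical logging side effect: its return value is discarded by
	// ChainFirstLeft, so the original error is preserved unchanged.
	logError := RIOR.ChainFirstLeft[int](func(e error) RIOR.ReaderIOResult[int] {
		fmt.Println("logged:", e)
		return RIOR.Right(0)
	})

	failing := RIOR.Left[int](fmt.Errorf("boom"))

	res := F.Pipe1(failing, logError)(context.Background())()
	fmt.Println(res) // expected to still be the original Left("boom")
}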


@@ -0,0 +1,183 @@
// Copyright (c) 2023 - 2025 IBM Corp.
// All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package readerioresult
import (
F "github.com/IBM/fp-go/v2/function"
RIOR "github.com/IBM/fp-go/v2/readerioresult"
)
// TailRec implements stack-safe tail recursion for the context-aware ReaderIOResult monad.
//
// This function enables recursive computations that combine four powerful concepts:
// - Context awareness: Automatic cancellation checking via [context.Context]
// - Environment dependency (Reader aspect): Access to configuration, context, or dependencies
// - Side effects (IO aspect): Logging, file I/O, network calls, etc.
// - Error handling (Either aspect): Computations that can fail with an error
//
// The function uses an iterative loop to execute the recursion, making it safe for deep
// or unbounded recursion without risking stack overflow. Additionally, it integrates
// context cancellation checking through [WithContext], ensuring that recursive computations
// can be cancelled gracefully.
//
// # How It Works
//
// TailRec takes a Kleisli arrow that returns Trampoline[A, B]:
// - Bounce(A): Continue recursion with the new state A
// - Land(B): Terminate recursion successfully and return the final result B
//
// The function wraps each iteration with [WithContext] to ensure context cancellation
// is checked before each recursive step. If the context is cancelled, the recursion
// terminates early with a context cancellation error.
//
// # Type Parameters
//
// - A: The state type that changes during recursion
// - B: The final result type when recursion terminates successfully
//
// # Parameters
//
// - f: A Kleisli arrow (A => ReaderIOResult[Trampoline[A, B]]) that:
// - Takes the current state A
// - Returns a ReaderIOResult that depends on [context.Context]
// - Can fail with error (Left in the outer Either)
// - Produces Trampoline[A, B] to control recursion flow (Right in the outer Either)
//
// # Returns
//
// A Kleisli arrow (A => ReaderIOResult[B]) that:
// - Takes an initial state A
// - Returns a ReaderIOResult that requires [context.Context]
// - Can fail with error or context cancellation
// - Produces the final result B after recursion completes
//
// # Context Cancellation
//
// Unlike the base [readerioresult.TailRec], this version automatically integrates
// context cancellation checking:
// - Each recursive iteration checks if the context is cancelled
// - If cancelled, recursion terminates immediately with a cancellation error
// - This prevents runaway recursive computations in cancelled contexts
// - Enables responsive cancellation for long-running recursive operations
//
// # Use Cases
//
// 1. Cancellable recursive algorithms:
// - Tree traversals that can be cancelled mid-operation
// - Graph algorithms with timeout requirements
// - Recursive parsers that respect cancellation
//
// 2. Long-running recursive computations:
// - File system traversals with cancellation support
// - Network operations with timeout handling
// - Database operations with connection timeout awareness
//
// 3. Interactive recursive operations:
// - User-initiated operations that can be cancelled
// - Background tasks with cancellation support
// - Streaming operations with graceful shutdown
//
// # Example: Cancellable Countdown
//
// countdownStep := func(n int) readerioresult.ReaderIOResult[tailrec.Trampoline[int, string]] {
// return func(ctx context.Context) ioeither.IOEither[error, tailrec.Trampoline[int, string]] {
// return func() either.Either[error, tailrec.Trampoline[int, string]] {
// if n <= 0 {
// return either.Right[error](tailrec.Land[int]("Done!"))
// }
// // Simulate some work
// time.Sleep(100 * time.Millisecond)
// return either.Right[error](tailrec.Bounce[string](n - 1))
// }
// }
// }
//
// countdown := readerioresult.TailRec(countdownStep)
//
// // With cancellation
// ctx, cancel := context.WithTimeout(context.Background(), 500*time.Millisecond)
// defer cancel()
// result := countdown(10)(ctx)() // Will be cancelled after ~500ms
//
// # Example: Cancellable File Processing
//
// type ProcessState struct {
// files []string
// processed []string
// }
//
// processStep := func(state ProcessState) readerioresult.ReaderIOResult[tailrec.Trampoline[ProcessState, []string]] {
// return func(ctx context.Context) ioeither.IOEither[error, tailrec.Trampoline[ProcessState, []string]] {
// return func() either.Either[error, tailrec.Trampoline[ProcessState, []string]] {
// if len(state.files) == 0 {
// return either.Right[error](tailrec.Land[ProcessState](state.processed))
// }
//
// file := state.files[0]
// // Process file (this could be cancelled via context)
// if err := processFileWithContext(ctx, file); err != nil {
// return either.Left[tailrec.Trampoline[ProcessState, []string]](err)
// }
//
// return either.Right[error](tailrec.Bounce[[]string](ProcessState{
// files: state.files[1:],
// processed: append(state.processed, file),
// }))
// }
// }
// }
//
// processFiles := readerioresult.TailRec(processStep)
// ctx, cancel := context.WithCancel(context.Background())
//
// // Can be cancelled at any point during processing
// go func() {
// time.Sleep(2 * time.Second)
// cancel() // Cancel after 2 seconds
// }()
//
// result := processFiles(ProcessState{files: manyFiles})(ctx)()
//
// # Stack Safety
//
// The iterative implementation ensures that even deeply recursive computations
// (thousands or millions of iterations) will not cause stack overflow, while
// still respecting context cancellation:
//
// // Safe for very large inputs with cancellation support
// largeCountdown := readerioresult.TailRec(countdownStep)
// ctx := context.Background()
// result := largeCountdown(1000000)(ctx)() // Safe, no stack overflow
//
// # Performance Considerations
//
// - Each iteration includes context cancellation checking overhead
// - Context checking happens before each recursive step
// - For performance-critical code, consider the cancellation checking cost
// - The [WithContext] wrapper adds minimal overhead for cancellation safety
//
// # See Also
//
// - [readerioresult.TailRec]: Base tail recursion without automatic context checking
// - [WithContext]: Context cancellation wrapper used internally
// - [Chain]: For sequencing ReaderIOResult computations
// - [Ask]: For accessing the context
// - [Left]/[Right]: For creating error/success values
//
//go:inline
func TailRec[A, B any](f Kleisli[A, Trampoline[A, B]]) Kleisli[A, B] {
return RIOR.TailRec(F.Flow2(f, WithContext))
}


@@ -0,0 +1,434 @@
// Copyright (c) 2023 - 2025 IBM Corp.
// All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package readerioresult
import (
"context"
"errors"
"fmt"
"sync/atomic"
"testing"
"time"
A "github.com/IBM/fp-go/v2/array"
E "github.com/IBM/fp-go/v2/either"
"github.com/IBM/fp-go/v2/tailrec"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestTailRec_BasicRecursion(t *testing.T) {
// Test basic countdown recursion
countdownStep := func(n int) ReaderIOResult[Trampoline[int, string]] {
return func(ctx context.Context) IOEither[Trampoline[int, string]] {
return func() Either[Trampoline[int, string]] {
if n <= 0 {
return E.Right[error](tailrec.Land[int]("Done!"))
}
return E.Right[error](tailrec.Bounce[string](n - 1))
}
}
}
countdown := TailRec(countdownStep)
result := countdown(5)(context.Background())()
assert.Equal(t, E.Of[error]("Done!"), result)
}
func TestTailRec_FactorialRecursion(t *testing.T) {
// Test factorial computation using tail recursion
type FactorialState struct {
n int
acc int
}
factorialStep := func(state FactorialState) ReaderIOResult[Trampoline[FactorialState, int]] {
return func(ctx context.Context) IOEither[Trampoline[FactorialState, int]] {
return func() Either[Trampoline[FactorialState, int]] {
if state.n <= 1 {
return E.Right[error](tailrec.Land[FactorialState](state.acc))
}
return E.Right[error](tailrec.Bounce[int](FactorialState{
n: state.n - 1,
acc: state.acc * state.n,
}))
}
}
}
factorial := TailRec(factorialStep)
result := factorial(FactorialState{n: 5, acc: 1})(context.Background())()
assert.Equal(t, E.Of[error](120), result) // 5! = 120
}
func TestTailRec_ErrorHandling(t *testing.T) {
// Test that errors are properly propagated
testErr := errors.New("computation error")
errorStep := func(n int) ReaderIOResult[Trampoline[int, string]] {
return func(ctx context.Context) IOEither[Trampoline[int, string]] {
return func() Either[Trampoline[int, string]] {
if n == 3 {
return E.Left[Trampoline[int, string]](testErr)
}
if n <= 0 {
return E.Right[error](tailrec.Land[int]("Done!"))
}
return E.Right[error](tailrec.Bounce[string](n - 1))
}
}
}
errorRecursion := TailRec(errorStep)
result := errorRecursion(5)(context.Background())()
assert.True(t, E.IsLeft(result))
err := E.ToError(result)
assert.Equal(t, testErr, err)
}
func TestTailRec_ContextCancellation(t *testing.T) {
// Test that recursion gets cancelled early when context is canceled
var iterationCount int32
slowStep := func(n int) ReaderIOResult[Trampoline[int, string]] {
return func(ctx context.Context) IOEither[Trampoline[int, string]] {
return func() Either[Trampoline[int, string]] {
atomic.AddInt32(&iterationCount, 1)
// Simulate some work
time.Sleep(50 * time.Millisecond)
if n <= 0 {
return E.Right[error](tailrec.Land[int]("Done!"))
}
return E.Right[error](tailrec.Bounce[string](n - 1))
}
}
}
slowRecursion := TailRec(slowStep)
// Create a context that will be cancelled after 100ms
ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
defer cancel()
start := time.Now()
result := slowRecursion(10)(ctx)()
elapsed := time.Since(start)
// Should be cancelled and return an error
assert.True(t, E.IsLeft(result))
// Should complete quickly due to cancellation (much less than 10 * 50ms = 500ms)
assert.Less(t, elapsed, 200*time.Millisecond)
// Should have executed only a few iterations before cancellation
iterations := atomic.LoadInt32(&iterationCount)
assert.Less(t, iterations, int32(5), "Should have been cancelled before completing all iterations")
}
func TestTailRec_ImmediateCancellation(t *testing.T) {
// Test with an already cancelled context
countdownStep := func(n int) ReaderIOResult[Trampoline[int, string]] {
return func(ctx context.Context) IOEither[Trampoline[int, string]] {
return func() Either[Trampoline[int, string]] {
if n <= 0 {
return E.Right[error](tailrec.Land[int]("Done!"))
}
return E.Right[error](tailrec.Bounce[string](n - 1))
}
}
}
countdown := TailRec(countdownStep)
// Create an already cancelled context
ctx, cancel := context.WithCancel(context.Background())
cancel()
result := countdown(5)(ctx)()
// Should immediately return a cancellation error
assert.True(t, E.IsLeft(result))
err := E.ToError(result)
assert.Equal(t, context.Canceled, err)
}
func TestTailRec_StackSafety(t *testing.T) {
// Test that deep recursion doesn't cause stack overflow
const largeN = 10000
countdownStep := func(n int) ReaderIOResult[Trampoline[int, int]] {
return func(ctx context.Context) IOEither[Trampoline[int, int]] {
return func() Either[Trampoline[int, int]] {
if n <= 0 {
return E.Right[error](tailrec.Land[int](0))
}
return E.Right[error](tailrec.Bounce[int](n - 1))
}
}
}
countdown := TailRec(countdownStep)
result := countdown(largeN)(context.Background())()
assert.Equal(t, E.Of[error](0), result)
}
func TestTailRec_StackSafetyWithCancellation(t *testing.T) {
// Test stack safety with cancellation after many iterations
const largeN = 100000
var iterationCount int32
countdownStep := func(n int) ReaderIOResult[Trampoline[int, int]] {
return func(ctx context.Context) IOEither[Trampoline[int, int]] {
return func() Either[Trampoline[int, int]] {
atomic.AddInt32(&iterationCount, 1)
// Add a small delay every 1000 iterations to make cancellation more likely
if n%1000 == 0 {
time.Sleep(1 * time.Millisecond)
}
if n <= 0 {
return E.Right[error](tailrec.Land[int](0))
}
return E.Right[error](tailrec.Bounce[int](n - 1))
}
}
}
countdown := TailRec(countdownStep)
// Cancel after 50ms to allow some iterations but not all
ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
defer cancel()
result := countdown(largeN)(ctx)()
// Should be cancelled (or completed if very fast)
// The key is that it doesn't cause a stack overflow
iterations := atomic.LoadInt32(&iterationCount)
assert.Greater(t, iterations, int32(0))
// If it was cancelled, verify it didn't complete all iterations
if E.IsLeft(result) {
assert.Less(t, iterations, int32(largeN))
}
}
func TestTailRec_ComplexState(t *testing.T) {
// Test with more complex state management
type ProcessState struct {
items []string
processed []string
errors []error
}
processStep := func(state ProcessState) ReaderIOResult[Trampoline[ProcessState, []string]] {
return func(ctx context.Context) IOEither[Trampoline[ProcessState, []string]] {
return func() Either[Trampoline[ProcessState, []string]] {
if A.IsEmpty(state.items) {
return E.Right[error](tailrec.Land[ProcessState](state.processed))
}
item := state.items[0]
// Simulate processing that might fail for certain items
if item == "error-item" {
return E.Left[Trampoline[ProcessState, []string]](
fmt.Errorf("failed to process item: %s", item))
}
return E.Right[error](tailrec.Bounce[[]string](ProcessState{
items: state.items[1:],
processed: append(state.processed, item),
errors: state.errors,
}))
}
}
}
processItems := TailRec(processStep)
t.Run("successful processing", func(t *testing.T) {
initialState := ProcessState{
items: []string{"item1", "item2", "item3"},
processed: []string{},
errors: []error{},
}
result := processItems(initialState)(context.Background())()
assert.Equal(t, E.Of[error]([]string{"item1", "item2", "item3"}), result)
})
t.Run("processing with error", func(t *testing.T) {
initialState := ProcessState{
items: []string{"item1", "error-item", "item3"},
processed: []string{},
errors: []error{},
}
result := processItems(initialState)(context.Background())()
assert.True(t, E.IsLeft(result))
err := E.ToError(result)
assert.Contains(t, err.Error(), "failed to process item: error-item")
})
}
func TestTailRec_CancellationDuringProcessing(t *testing.T) {
// Test cancellation during a realistic processing scenario
type FileProcessState struct {
files []string
processed int
}
var processedCount int32
processFileStep := func(state FileProcessState) ReaderIOResult[Trampoline[FileProcessState, int]] {
return func(ctx context.Context) IOEither[Trampoline[FileProcessState, int]] {
return func() Either[Trampoline[FileProcessState, int]] {
if A.IsEmpty(state.files) {
return E.Right[error](tailrec.Land[FileProcessState](state.processed))
}
// Simulate file processing time
time.Sleep(20 * time.Millisecond)
atomic.AddInt32(&processedCount, 1)
return E.Right[error](tailrec.Bounce[int](FileProcessState{
files: state.files[1:],
processed: state.processed + 1,
}))
}
}
}
processFiles := TailRec(processFileStep)
// Create many files to process
files := make([]string, 20)
for i := range files {
files[i] = fmt.Sprintf("file%d.txt", i)
}
initialState := FileProcessState{
files: files,
processed: 0,
}
// Cancel after 100ms (should allow ~5 files to be processed)
ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
defer cancel()
start := time.Now()
result := processFiles(initialState)(ctx)()
elapsed := time.Since(start)
// Should be cancelled
assert.True(t, E.IsLeft(result))
// Should complete quickly due to cancellation
assert.Less(t, elapsed, 150*time.Millisecond)
// Should have processed some but not all files
processed := atomic.LoadInt32(&processedCount)
assert.Greater(t, processed, int32(0))
assert.Less(t, processed, int32(20))
}
func TestTailRec_ZeroIterations(t *testing.T) {
// Test case where recursion terminates immediately
immediateStep := func(n int) ReaderIOResult[Trampoline[int, string]] {
return func(ctx context.Context) IOEither[Trampoline[int, string]] {
return func() Either[Trampoline[int, string]] {
return E.Right[error](tailrec.Land[int]("immediate"))
}
}
}
immediate := TailRec(immediateStep)
result := immediate(100)(context.Background())()
assert.Equal(t, E.Of[error]("immediate"), result)
}
func TestTailRec_ContextWithDeadline(t *testing.T) {
// Test with context deadline
var iterationCount int32
slowStep := func(n int) ReaderIOResult[Trampoline[int, string]] {
return func(ctx context.Context) IOEither[Trampoline[int, string]] {
return func() Either[Trampoline[int, string]] {
atomic.AddInt32(&iterationCount, 1)
time.Sleep(30 * time.Millisecond)
if n <= 0 {
return E.Right[error](tailrec.Land[int]("Done!"))
}
return E.Right[error](tailrec.Bounce[string](n - 1))
}
}
}
slowRecursion := TailRec(slowStep)
// Set deadline 80ms from now
ctx, cancel := context.WithDeadline(context.Background(), time.Now().Add(80*time.Millisecond))
defer cancel()
result := slowRecursion(10)(ctx)()
// Should be cancelled due to deadline
assert.True(t, E.IsLeft(result))
// Should have executed only a few iterations
iterations := atomic.LoadInt32(&iterationCount)
assert.Greater(t, iterations, int32(0))
assert.Less(t, iterations, int32(5))
}
func TestTailRec_ContextWithValue(t *testing.T) {
// Test that context values are preserved through recursion
type contextKey string
const testKey contextKey = "test"
valueStep := func(n int) ReaderIOResult[Trampoline[int, string]] {
return func(ctx context.Context) IOEither[Trampoline[int, string]] {
return func() Either[Trampoline[int, string]] {
value := ctx.Value(testKey)
require.NotNil(t, value)
assert.Equal(t, "test-value", value.(string))
if n <= 0 {
return E.Right[error](tailrec.Land[int]("Done!"))
}
return E.Right[error](tailrec.Bounce[string](n - 1))
}
}
}
valueRecursion := TailRec(valueStep)
ctx := context.WithValue(context.Background(), testKey, "test-value")
result := valueRecursion(3)(ctx)()
assert.Equal(t, E.Of[error]("Done!"), result)
}


@@ -16,7 +16,11 @@
package readerioresult
import (
"context"
"io"
RIOR "github.com/IBM/fp-go/v2/readerioresult"
"github.com/IBM/fp-go/v2/result"
)
// WithResource constructs a function that creates a resource, then operates on it and then releases the resource.
@@ -55,3 +59,111 @@ import (
func WithResource[A, R, ANY any](onCreate ReaderIOResult[R], onRelease Kleisli[R, ANY]) Kleisli[Kleisli[R, A], A] {
return RIOR.WithResource[A](onCreate, onRelease)
}
// onClose is a helper function that creates a ReaderIOResult for closing an io.Closer resource.
// It safely calls the Close() method and handles any errors that may occur during closing.
//
// Type Parameters:
// - A: Must implement io.Closer interface
//
// Parameters:
// - a: The resource to close
//
// Returns:
// - ReaderIOResult[any]: A computation that closes the resource and returns nil on success
//
// The function ignores the context parameter since closing operations typically don't need context.
// Any error from Close() is captured and returned as a Result error.
func onClose[A io.Closer](a A) ReaderIOResult[any] {
return func(_ context.Context) IOResult[any] {
return func() Result[any] {
return result.TryCatchError[any](nil, a.Close())
}
}
}
// WithCloser creates a resource management function specifically for io.Closer resources.
// This is a specialized version of WithResource that automatically handles closing of resources
// that implement the io.Closer interface.
//
// The function ensures that:
// - The resource is created using the onCreate function
// - The resource is automatically closed when the operation completes (success or failure)
// - Any errors during closing are properly handled
// - The resource is closed even if the main operation fails or the context is canceled
//
// Type Parameters:
// - B: The type of value returned by the resource-using function
// - A: The type of resource that implements io.Closer
//
// Parameters:
// - onCreate: ReaderIOResult that creates the io.Closer resource
//
// Returns:
// - A function that takes a resource-using function and returns a ReaderIOResult[B]
//
// Example with file operations:
//
// openFile := func(filename string) ReaderIOResult[*os.File] {
// return TryCatch(func(ctx context.Context) func() (*os.File, error) {
// return func() (*os.File, error) {
// return os.Open(filename)
// }
// })
// }
//
// fileReader := WithCloser(openFile("data.txt"))
// result := fileReader(func(f *os.File) ReaderIOResult[string] {
// return TryCatch(func(ctx context.Context) func() (string, error) {
// return func() (string, error) {
// data, err := io.ReadAll(f)
// return string(data), err
// }
// })
// })
//
// Example with HTTP response:
//
// httpGet := func(url string) ReaderIOResult[*http.Response] {
// return TryCatch(func(ctx context.Context) func() (*http.Response, error) {
// return func() (*http.Response, error) {
// return http.Get(url)
// }
// })
// }
//
// responseReader := WithCloser(httpGet("https://api.example.com/data"))
// result := responseReader(func(resp *http.Response) ReaderIOResult[[]byte] {
// return TryCatch(func(ctx context.Context) func() ([]byte, error) {
// return func() ([]byte, error) {
// return io.ReadAll(resp.Body)
// }
// })
// })
//
// Example with database connection:
//
// openDB := func(dsn string) ReaderIOResult[*sql.DB] {
// return TryCatch(func(ctx context.Context) func() (*sql.DB, error) {
// return func() (*sql.DB, error) {
// return sql.Open("postgres", dsn)
// }
// })
// }
//
// dbQuery := WithCloser(openDB("postgres://..."))
// result := dbQuery(func(db *sql.DB) ReaderIOResult[[]User] {
// return TryCatch(func(ctx context.Context) func() ([]User, error) {
// return func() ([]User, error) {
// rows, err := db.QueryContext(ctx, "SELECT * FROM users")
// if err != nil {
// return nil, err
// }
// defer rows.Close()
// return scanUsers(rows)
// }
// })
// })
func WithCloser[B any, A io.Closer](onCreate ReaderIOResult[A]) Kleisli[Kleisli[A, B], B] {
return WithResource[B](onCreate, onClose[A])
}


@@ -0,0 +1,181 @@
// Copyright (c) 2023 - 2025 IBM Corp.
// All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package readerioresult
import (
"context"
"time"
RIO "github.com/IBM/fp-go/v2/context/readerio"
R "github.com/IBM/fp-go/v2/retry"
RG "github.com/IBM/fp-go/v2/retry/generic"
)
// Retrying retries a ReaderIOResult computation according to a retry policy with context awareness.
//
// This function implements a retry mechanism for operations that depend on a [context.Context],
// perform side effects (IO), and can fail (Result). It respects context cancellation, meaning
// that if the context is cancelled during retry delays, the operation will stop immediately
// and return the cancellation error.
//
// The retry loop will continue until one of the following occurs:
// - The check function returns false (the result is a success or a non-retryable failure)
// - The retry policy returns None (retry limit reached)
// - The context is cancelled (returns context.Canceled or context.DeadlineExceeded)
//
// Parameters:
//
// - policy: A RetryPolicy that determines when and how long to wait between retries.
// The policy receives a RetryStatus on each iteration and returns an optional delay.
// If it returns None, retrying stops. Common policies include LimitRetries,
// ExponentialBackoff, and CapDelay from the retry package.
//
// - action: A Kleisli arrow that takes a RetryStatus and returns a ReaderIOResult[A].
// This function is called on each retry attempt and receives information about the
// current retry state (iteration number, cumulative delay, etc.). The action depends
// on a context.Context and produces a Result[A]. The context passed to the action
// will be the same context used for retry delays, so cancellation is properly propagated.
//
// - check: A predicate function that examines the Result[A] and returns true if the
// operation should be retried, or false if it should stop. This allows you to
// distinguish between retryable failures (e.g., network timeouts) and permanent
// failures (e.g., invalid input). Note that context cancellation errors will
// automatically stop retrying regardless of this function's return value.
//
// Returns:
//
// A ReaderIOResult[A] that, when executed with a context, will perform the retry
// logic with context cancellation support and return the final result.
//
// Type Parameters:
// - A: The type of the success value
//
// Context Cancellation:
//
// The retry mechanism respects context cancellation in two ways:
// 1. During retry delays: If the context is cancelled while waiting between retries,
// the operation stops immediately and returns the context error.
// 2. During action execution: If the action itself checks the context and returns
// an error due to cancellation, the retry loop will stop (assuming the check
// function doesn't force a retry on context errors).
//
// Example:
//
// // Create a retry policy: exponential backoff with a cap, limited to 5 retries
// policy := M.Concat(
// retry.LimitRetries(5),
// retry.CapDelay(10*time.Second, retry.ExponentialBackoff(100*time.Millisecond)),
// )(retry.Monoid)
//
// // Action that fetches data, with retry status information
// fetchData := func(status retry.RetryStatus) ReaderIOResult[string] {
// return func(ctx context.Context) IOResult[string] {
// return func() Result[string] {
// // Check if context is cancelled
// if ctx.Err() != nil {
// return result.Left[string](ctx.Err())
// }
// // Simulate an HTTP request that might fail
// if status.IterNumber < 3 {
// return result.Left[string](fmt.Errorf("temporary error"))
// }
// return result.Of("success")
// }
// }
// }
//
// // Check function: retry on any error except context cancellation
// shouldRetry := func(r Result[string]) bool {
// return result.IsLeft(r) && !errors.Is(result.GetLeft(r), context.Canceled)
// }
//
// // Create the retrying computation
// retryingFetch := Retrying(policy, fetchData, shouldRetry)
//
// // Execute with a cancellable context
// ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
// defer cancel()
// ioResult := retryingFetch(ctx)
// finalResult := ioResult()
//
// See also:
// - retry.RetryPolicy for available retry policies
// - retry.RetryStatus for information passed to the action
// - context.Context for context cancellation semantics
//
//go:inline
func Retrying[A any](
policy R.RetryPolicy,
action Kleisli[R.RetryStatus, A],
check Predicate[Result[A]],
) ReaderIOResult[A] {
// delayWithCancel implements a context-aware delay mechanism for retry operations.
// It creates a timeout context that will be cancelled when either:
// 1. The delay duration expires (normal case), or
// 2. The parent context is cancelled (early termination)
//
// The function waits on timeoutCtx.Done(), which will be signaled in either case:
// - If the delay expires, timeoutCtx is cancelled by the timeout
// - If the parent ctx is cancelled, timeoutCtx inherits the cancellation
//
// After the wait completes, we dispatch to the next action by calling ri(ctx)().
// This works correctly because the action is wrapped in WithContextK, which handles
// context cancellation by checking ctx.Err() and returning an appropriate error
// (context.Canceled or context.DeadlineExceeded) when the context is cancelled.
//
// This design ensures that:
// - Retry delays respect context cancellation and terminate immediately
// - The cancellation error propagates correctly through the retry chain
// - No unnecessary delays occur when the context is already cancelled
delayWithCancel := func(delay time.Duration) RIO.Operator[R.RetryStatus, R.RetryStatus] {
return func(ri ReaderIO[R.RetryStatus]) ReaderIO[R.RetryStatus] {
return func(ctx context.Context) IO[R.RetryStatus] {
return func() R.RetryStatus {
// Create a timeout context that will be cancelled when either:
// - The delay duration expires, or
// - The parent context is cancelled
timeoutCtx, cancelTimeout := context.WithTimeout(ctx, delay)
defer cancelTimeout()
// Wait for either the timeout or parent context cancellation
<-timeoutCtx.Done()
// Dispatch to the next action with the original context.
// WithContextK will handle context cancellation correctly.
return ri(ctx)()
}
}
}
}
// get an implementation for the types
return RG.Retrying(
RIO.Chain[Result[A], Trampoline[R.RetryStatus, Result[A]]],
RIO.Map[R.RetryStatus, Trampoline[R.RetryStatus, Result[A]]],
RIO.Of[Trampoline[R.RetryStatus, Result[A]]],
RIO.Of[R.RetryStatus],
delayWithCancel,
RIO.TailRec,
policy,
WithContextK(action),
check,
)
}
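
The delayWithCancel helper inside Retrying above implements a cancellable sleep: it derives a timeout context from the parent and waits on its Done channel, so the wait ends either when the delay expires or when the parent context is cancelled. A standalone sketch of that pattern in plain Go, independent of fp-go; sleepCtx is an illustrative helper, not part of the library:

package main

import (
	"context"
	"fmt"
	"time"
)

// sleepCtx waits for the given delay but returns early if ctx is cancelled,
// mirroring delayWithCancel: waiting on the derived timeout context's Done
// channel covers both delay expiry and parent cancellation.
func sleepCtx(ctx context.Context, delay time.Duration) error {
	timeoutCtx, cancel := context.WithTimeout(ctx, delay)
	defer cancel()
	<-timeoutCtx.Done()
	// nil if only the delay expired, context.Canceled/DeadlineExceeded otherwise
	return ctx.Err()
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	go func() {
		time.Sleep(50 * time.Millisecond)
		cancel()
	}()
	start := time.Now()
	err := sleepCtx(ctx, time.Second)
	fmt.Println(time.Since(start).Round(10*time.Millisecond), err) // roughly 50ms, context.Canceled
}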


@@ -0,0 +1,511 @@
// Copyright (c) 2025 IBM Corp.
// All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package readerioresult
import (
"context"
"errors"
"fmt"
"testing"
"time"
"github.com/IBM/fp-go/v2/result"
R "github.com/IBM/fp-go/v2/retry"
"github.com/stretchr/testify/assert"
)
// Helper function to create a test retry policy
func testRetryPolicy() R.RetryPolicy {
return R.Monoid.Concat(
R.LimitRetries(5),
R.CapDelay(1*time.Second, R.ExponentialBackoff(10*time.Millisecond)),
)
}
// TestRetrying_SuccessOnFirstAttempt tests that Retrying succeeds immediately
// when the action succeeds on the first attempt.
func TestRetrying_SuccessOnFirstAttempt(t *testing.T) {
policy := testRetryPolicy()
action := func(status R.RetryStatus) ReaderIOResult[string] {
return func(ctx context.Context) IOResult[string] {
return func() Result[string] {
return result.Of("success")
}
}
}
check := func(r Result[string]) bool {
return result.IsLeft(r)
}
retrying := Retrying(policy, action, check)
ctx := t.Context()
res := retrying(ctx)()
assert.Equal(t, result.Of("success"), res)
}
// TestRetrying_SuccessAfterRetries tests that Retrying eventually succeeds
// after a few failed attempts.
func TestRetrying_SuccessAfterRetries(t *testing.T) {
policy := testRetryPolicy()
action := func(status R.RetryStatus) ReaderIOResult[string] {
return func(ctx context.Context) IOResult[string] {
return func() Result[string] {
// Fail on first 3 attempts, succeed on 4th
if status.IterNumber < 3 {
return result.Left[string](fmt.Errorf("attempt %d failed", status.IterNumber))
}
return result.Of(fmt.Sprintf("success on attempt %d", status.IterNumber))
}
}
}
check := func(r Result[string]) bool {
return result.IsLeft(r)
}
retrying := Retrying(policy, action, check)
ctx := t.Context()
res := retrying(ctx)()
assert.Equal(t, result.Of("success on attempt 3"), res)
}
// TestRetrying_ExhaustsRetries tests that Retrying stops after the retry limit
// is reached and returns the last error.
func TestRetrying_ExhaustsRetries(t *testing.T) {
policy := R.LimitRetries(3)
action := func(status R.RetryStatus) ReaderIOResult[string] {
return func(ctx context.Context) IOResult[string] {
return func() Result[string] {
return result.Left[string](fmt.Errorf("attempt %d failed", status.IterNumber))
}
}
}
check := func(r Result[string]) bool {
return result.IsLeft(r)
}
retrying := Retrying(policy, action, check)
ctx := t.Context()
res := retrying(ctx)()
assert.True(t, result.IsLeft(res))
assert.Equal(t, result.Left[string](fmt.Errorf("attempt 3 failed")), res)
}
// TestRetrying_ActionChecksContextCancellation tests that actions can check
// the context and return early if it's cancelled.
func TestRetrying_ActionChecksContextCancellation(t *testing.T) {
policy := R.LimitRetries(10)
attemptCount := 0
action := func(status R.RetryStatus) ReaderIOResult[string] {
return func(ctx context.Context) IOResult[string] {
return func() Result[string] {
attemptCount++
// Check context at the start of the action
if ctx.Err() != nil {
return result.Left[string](ctx.Err())
}
// Simulate work that might take time
time.Sleep(10 * time.Millisecond)
// Check context again after work
if ctx.Err() != nil {
return result.Left[string](ctx.Err())
}
// Always fail to trigger retries
return result.Left[string](fmt.Errorf("attempt %d failed", status.IterNumber))
}
}
}
check := func(r Result[string]) bool {
// Don't retry on context errors
val, err := result.Unwrap(r)
_ = val
if err != nil && (errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded)) {
return false
}
return result.IsLeft(r)
}
retrying := Retrying(policy, action, check)
// Create a context that we'll cancel after a short time
ctx, cancel := context.WithCancel(t.Context())
// Start the retry operation in a goroutine
resultChan := make(chan Result[string], 1)
go func() {
res := retrying(ctx)()
resultChan <- res
}()
// Cancel the context after allowing a couple attempts
time.Sleep(50 * time.Millisecond)
cancel()
// Wait for the result
res := <-resultChan
// Should have stopped due to context cancellation
assert.True(t, result.IsLeft(res))
// Should have stopped early (not all 10 attempts)
assert.Less(t, attemptCount, 10, "Should stop retrying when action detects context cancellation")
// The error should be related to context cancellation or an early attempt
val, err := result.Unwrap(res)
_ = val
assert.Error(t, err)
}
// TestRetrying_ContextCancelledBeforeStart tests that if the context is already
// cancelled before starting, the operation fails immediately.
func TestRetrying_ContextCancelledBeforeStart(t *testing.T) {
policy := testRetryPolicy()
attemptCount := 0
action := func(status R.RetryStatus) ReaderIOResult[string] {
return func(ctx context.Context) IOResult[string] {
return func() Result[string] {
attemptCount++
// Check context before doing work
if ctx.Err() != nil {
return result.Left[string](ctx.Err())
}
return result.Left[string](fmt.Errorf("attempt %d failed", status.IterNumber))
}
}
}
check := func(r Result[string]) bool {
// Don't retry on context errors
val, err := result.Unwrap(r)
_ = val
if err != nil && errors.Is(err, context.Canceled) {
return false
}
return result.IsLeft(r)
}
retrying := Retrying(policy, action, check)
// Create an already-cancelled context
ctx, cancel := context.WithCancel(t.Context())
cancel()
res := retrying(ctx)()
assert.True(t, result.IsLeft(res))
val, err := result.Unwrap(res)
_ = val
assert.True(t, errors.Is(err, context.Canceled))
// Should have attempted at most once
assert.LessOrEqual(t, attemptCount, 1)
}
// TestRetrying_ContextTimeoutInAction tests that actions respect context deadlines.
func TestRetrying_ContextTimeoutInAction(t *testing.T) {
policy := R.LimitRetries(10)
attemptCount := 0
action := func(status R.RetryStatus) ReaderIOResult[string] {
return func(ctx context.Context) IOResult[string] {
return func() Result[string] {
attemptCount++
// Check context before doing work
if ctx.Err() != nil {
return result.Left[string](ctx.Err())
}
// Simulate some work
time.Sleep(50 * time.Millisecond)
// Check context after work
if ctx.Err() != nil {
return result.Left[string](ctx.Err())
}
// Always fail to trigger retries
return result.Left[string](fmt.Errorf("attempt %d failed", status.IterNumber))
}
}
}
check := func(r Result[string]) bool {
// Don't retry on context errors
val, err := result.Unwrap(r)
_ = val
if err != nil && (errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded)) {
return false
}
return result.IsLeft(r)
}
retrying := Retrying(policy, action, check)
// Create a context with a short timeout
ctx, cancel := context.WithTimeout(t.Context(), 150*time.Millisecond)
defer cancel()
startTime := time.Now()
res := retrying(ctx)()
elapsed := time.Since(startTime)
assert.True(t, result.IsLeft(res))
// Should have stopped before completing all 10 retries
assert.Less(t, attemptCount, 10, "Should stop retrying when action detects context timeout")
// Should have stopped around the timeout duration
assert.Less(t, elapsed, 500*time.Millisecond, "Should stop soon after timeout")
}
// TestRetrying_CheckFunctionStopsRetry tests that the check function can
// stop retrying even when errors occur.
func TestRetrying_CheckFunctionStopsRetry(t *testing.T) {
policy := testRetryPolicy()
action := func(status R.RetryStatus) ReaderIOResult[string] {
return func(ctx context.Context) IOResult[string] {
return func() Result[string] {
if status.IterNumber == 0 {
return result.Left[string](fmt.Errorf("retryable error"))
}
return result.Left[string](fmt.Errorf("permanent error"))
}
}
}
// Only retry on "retryable error"
check := func(r Result[string]) bool {
return result.IsLeft(r) && result.Fold(
func(err error) bool { return err.Error() == "retryable error" },
func(string) bool { return false },
)(r)
}
retrying := Retrying(policy, action, check)
ctx := t.Context()
res := retrying(ctx)()
assert.Equal(t, result.Left[string](fmt.Errorf("permanent error")), res)
}
// TestRetrying_ExponentialBackoff tests that exponential backoff is applied.
func TestRetrying_ExponentialBackoff(t *testing.T) {
// Use a policy with measurable delays
policy := R.Monoid.Concat(
R.LimitRetries(3),
R.ExponentialBackoff(50*time.Millisecond),
)
startTime := time.Now()
action := func(status R.RetryStatus) ReaderIOResult[string] {
return func(ctx context.Context) IOResult[string] {
return func() Result[string] {
if status.IterNumber < 2 {
return result.Left[string](fmt.Errorf("retry"))
}
return result.Of("success")
}
}
}
check := func(r Result[string]) bool {
return result.IsLeft(r)
}
retrying := Retrying(policy, action, check)
ctx := t.Context()
res := retrying(ctx)()
elapsed := time.Since(startTime)
assert.Equal(t, result.Of("success"), res)
// With exponential backoff starting at 50ms:
// Iteration 0: no delay
// Iteration 1: 50ms delay
// Iteration 2: 100ms delay
// Total should be at least 150ms
assert.GreaterOrEqual(t, elapsed, 150*time.Millisecond)
}
// TestRetrying_ContextValuePropagation tests that context values are properly
// propagated through the retry mechanism.
func TestRetrying_ContextValuePropagation(t *testing.T) {
policy := R.LimitRetries(2)
type contextKey string
const requestIDKey contextKey = "requestID"
action := func(status R.RetryStatus) ReaderIOResult[string] {
return func(ctx context.Context) IOResult[string] {
return func() Result[string] {
// Extract value from context
requestID, ok := ctx.Value(requestIDKey).(string)
if !ok {
return result.Left[string](fmt.Errorf("missing request ID"))
}
if status.IterNumber < 1 {
return result.Left[string](fmt.Errorf("retry needed"))
}
return result.Of(fmt.Sprintf("processed request %s", requestID))
}
}
}
check := func(r Result[string]) bool {
return result.IsLeft(r)
}
retrying := Retrying(policy, action, check)
// Create context with a value
ctx := context.WithValue(t.Context(), requestIDKey, "12345")
res := retrying(ctx)()
assert.Equal(t, result.Of("processed request 12345"), res)
}
// TestRetrying_RetryStatusProgression tests that the RetryStatus is properly
// updated on each iteration.
func TestRetrying_RetryStatusProgression(t *testing.T) {
policy := testRetryPolicy()
var iterations []uint
action := func(status R.RetryStatus) ReaderIOResult[int] {
return func(ctx context.Context) IOResult[int] {
return func() Result[int] {
iterations = append(iterations, status.IterNumber)
if status.IterNumber < 3 {
return result.Left[int](fmt.Errorf("retry"))
}
return result.Of(int(status.IterNumber))
}
}
}
check := func(r Result[int]) bool {
return result.IsLeft(r)
}
retrying := Retrying(policy, action, check)
ctx := t.Context()
res := retrying(ctx)()
assert.Equal(t, result.Of(3), res)
// Should have attempted iterations 0, 1, 2, 3
assert.Equal(t, []uint{0, 1, 2, 3}, iterations)
}
// TestRetrying_ContextCancelledDuringDelay tests that the retry operation
// stops immediately when the context is cancelled during a retry delay,
// even if there are still retries remaining according to the policy.
func TestRetrying_ContextCancelledDuringDelay(t *testing.T) {
// Use a policy with significant delays to ensure we can cancel during the delay
policy := R.Monoid.Concat(
R.LimitRetries(10),
R.ConstantDelay(200*time.Millisecond),
)
attemptCount := 0
action := func(status R.RetryStatus) ReaderIOResult[string] {
return func(ctx context.Context) IOResult[string] {
return func() Result[string] {
attemptCount++
// Always fail to trigger retries
return result.Left[string](fmt.Errorf("attempt %d failed", status.IterNumber))
}
}
}
// Always retry on errors (don't check for context cancellation in check function)
check := func(r Result[string]) bool {
return result.IsLeft(r)
}
retrying := Retrying(policy, action, check)
// Create a context that we'll cancel during the retry delay
ctx, cancel := context.WithCancel(t.Context())
// Start the retry operation in a goroutine
resultChan := make(chan Result[string], 1)
startTime := time.Now()
go func() {
res := retrying(ctx)()
resultChan <- res
}()
// Wait for the first attempt to complete and the delay to start
time.Sleep(50 * time.Millisecond)
// Cancel the context during the retry delay
cancel()
// Wait for the result
res := <-resultChan
elapsed := time.Since(startTime)
// Should have stopped due to context cancellation
assert.True(t, result.IsLeft(res))
// Should have attempted only once or twice (not all 10 attempts)
// because the context was cancelled during the delay
assert.LessOrEqual(t, attemptCount, 2, "Should stop retrying when context is cancelled during delay")
// Should have stopped quickly after cancellation, not waiting for all delays
// With 10 retries and 200ms delays, it would take ~2 seconds without cancellation
// With cancellation during first delay, it should complete in well under 500ms
assert.Less(t, elapsed, 500*time.Millisecond, "Should stop immediately when context is cancelled during delay")
// When context is cancelled during the delay, the retry mechanism
// detects the cancellation and returns a context error
val, err := result.Unwrap(res)
_ = val
assert.Error(t, err)
// The error should be a context cancellation error since cancellation
// happened during the delay between retries
assert.True(t, errors.Is(err, context.Canceled), "Should return context.Canceled when cancelled during delay")
}

Some files were not shown because too many files have changed in this diff.