mirror of https://github.com/kellyjonbrazil/jc.git synced 2026-04-03 17:44:07 +02:00

Compare commits


125 Commits

Author SHA1 Message Date
Kelly Brazil
2bccd14c5d Merge pull request #499 from kellyjonbrazil/dev
v1.24.0
2023-12-17 18:08:02 +00:00
Kelly Brazil
0d4823c9de doc update 2023-12-17 09:44:18 -08:00
Kelly Brazil
2a14f56b72 add proc-cmdline tests 2023-12-16 13:18:14 -08:00
Kelly Brazil
fe49759598 doc update 2023-12-16 12:59:35 -08:00
Kelly Brazil
ee737a59eb doc update 2023-12-16 12:55:29 -08:00
Kelly Brazil
517ab10930 add proc-cmdline parser 2023-12-16 12:09:37 -08:00
Kelly Brazil
604fb574be doc update 2023-12-16 11:44:11 -08:00
Kelly Brazil
a254ee8d88 add test for issue 490 2023-12-16 11:39:40 -08:00
Kelly Brazil
f784a7a76d doc update 2023-12-10 13:13:57 -08:00
Kelly Brazil
2e33afbe18 doc cleanup 2023-12-10 10:53:28 -08:00
Kelly Brazil
103bb174fc rename pkg-index-alpine to pkg-index-apk 2023-12-10 10:49:05 -08:00
Kelly Brazil
2a76a64fa1 schema update 2023-12-10 10:41:20 -08:00
Kelly Brazil
c8fb56c601 formatting 2023-12-10 10:41:09 -08:00
Kelly Brazil
e835227027 doc update 2023-12-10 10:30:05 -08:00
Kelly Brazil
88ffcaee56 update schema. fix no-data output to match other parsers. 2023-12-10 10:29:58 -08:00
Kelly Brazil
a9ba98847c update _device_pattern regex for "Freezing with 100% CPU when parsing Xrandr output - 1.23.6" #490 2023-12-09 17:01:03 -08:00
Kelly Brazil
2630049ab7 possible fix for infinite loop issue 2023-12-09 16:14:27 -08:00
Kelly Brazil
47c7e081f3 doc update 2023-12-09 11:45:26 -08:00
Kelly Brazil
ef7f755614 doc update 2023-12-09 11:41:45 -08:00
Kelly Brazil
32bd7ffbf6 formatting updates 2023-12-09 11:41:22 -08:00
Kelly Brazil
347097a294 add convert_size_to_int function 2023-12-09 11:41:06 -08:00
Kelly Brazil
356857f5d6 rename deb-packages-index to pkg-index-deb 2023-12-09 10:17:48 -08:00
Kelly Brazil
ee12c52291 rename apkindex parser to pkg-index-alpine 2023-12-09 10:03:33 -08:00
Ron Green
0e7ebf4dc1 feat(iftop): add iftop-scanning (#484)
* feat(iftop): add iftop-scanning

this is not even an MVP, but I would like it to exist to allow per client json aggregation

also, a future use is a stream response

* fix typos and test first regex

* add more iftop fun

* Update iftop.py

* add tests and json

Signed-off-by: Ron Green <11993626+georgettica@users.noreply.github.com>

* feat: make work and add tests

Signed-off-by: Ron Green <11993626+georgettica@users.noreply.github.com>

* add completion

* change schema for query looping

* fix: tests

* fix review comments

* feat: add byte parsing

* add no-port to options

* remove completion and format dep

Signed-off-by: Ron Green <11993626+georgettica@users.noreply.github.com>

* Update setup.py

* Update iftop.py

---------

Signed-off-by: Ron Green <11993626+georgettica@users.noreply.github.com>
Co-authored-by: Kelly Brazil <kellyjonbrazil@gmail.com>
2023-12-07 18:22:53 -08:00
Kelly Brazil
f1e0cec9d6 add apkindex parser 2023-12-04 14:01:30 -08:00
Roey Darwish Dror
d96a2a8623 APKINDEX parser (#487) (#491)
* APKINDEX parser (#487)

* Missing space in doc
2023-12-04 13:35:04 -08:00
Hugo van Kemenade
1b1bc46222 Remove redundant Python 2 code (#493)
Co-authored-by: Kelly Brazil <kellyjonbrazil@gmail.com>
2023-12-04 13:03:15 -08:00
Kelly Brazil
d5a8b4eed2 remove deprecated iso-datetime parser 2023-12-04 11:35:37 -08:00
Kelly Brazil
5ddd4f0e86 version bump to 1.24.0 2023-12-04 11:33:04 -08:00
Kelly Brazil
8b94c326de add no data test 2023-12-04 09:59:28 -08:00
Hugo van Kemenade
29b012e66d Add support for Python 3.12 (#492)
Co-authored-by: Kelly Brazil <kellyjonbrazil@gmail.com>
2023-12-04 09:55:07 -08:00
Kelly Brazil
572a3207cd doc update 2023-11-28 11:23:47 -08:00
Kelly Brazil
2a88f2be6b fix mypy issues 2023-11-28 11:21:13 -08:00
Roey Darwish Dror
3de6eac1ad swapon parser (#383) (#489)
* swapon parser

* revert lib

* fix lib

* Added tests

* Fix tests

---------

Co-authored-by: Kelly Brazil <kellyjonbrazil@gmail.com>
2023-11-28 11:15:40 -08:00
Kelly Brazil
f44260603e doc update 2023-11-24 10:38:33 -08:00
Kelly Brazil
1c60f5355e convert one more field to integer 2023-11-24 09:45:56 -08:00
Kelly Brazil
40fa78a966 version bump 2023-11-24 09:44:03 -08:00
Himadri Bhattacharjee
71db67ef49 refactor: acpi parser: adhere code to the happy path to avoid nested branches (#488)
* refactor: acpi parser: keep working code in happy path to avoid nested branches

* fix: use elif for branches marked charging and discharging

---------

Co-authored-by: Kelly Brazil <kellyjonbrazil@gmail.com>
2023-11-24 09:38:10 -08:00
Kelly Brazil
1cd723b48f add tests 2023-11-23 15:19:14 -08:00
Kelly Brazil
82ee4d7b30 add debconf-show parser 2023-11-23 14:52:06 -08:00
Kelly Brazil
dfd19f38f3 doc update 2023-11-23 12:12:05 -08:00
Kelly Brazil
2358c883d0 remove unused type annotation import 2023-11-23 12:07:54 -08:00
Kelly Brazil
79e4f3a761 doc update 2023-11-23 12:05:03 -08:00
Kelly Brazil
5be45622cc Merge branch 'dev' of https://github.com/kellyjonbrazil/jc into dev 2023-11-23 12:01:54 -08:00
Kelly Brazil
5f4136b943 run tests 2023-11-23 12:00:58 -08:00
Kelly Brazil
941bfe2724 update szenius/set-timezone to v1.2 2023-11-23 11:59:12 -08:00
Kelly Brazil
3ed44a26d9 doc update 2023-11-23 11:51:04 -08:00
Kelly Brazil
b7270517bd add tune2fs parser 2023-11-23 11:49:04 -08:00
Kelly Brazil
bf63ac93c6 add Photon linux 2023-11-21 15:01:04 -08:00
Kelly Brazil
1cb80f15c2 add tests 2023-11-21 14:51:37 -08:00
Kelly Brazil
b5c22c6e53 add deb-packages-index parser 2023-11-21 14:40:43 -08:00
Kelly Brazil
7951366117 remove old unused line 2023-11-17 12:06:43 -08:00
Kelly Brazil
c78a4bb655 mount fix for spaces in mountpoint name 2023-11-14 15:17:31 -08:00
Kelly Brazil
8aceda18b9 Merge pull request #483 from wolkenarchitekt/master
#482 Quickfix to parse mount output
2023-11-14 23:07:52 +00:00
Ingo Weinmann
e0c75a9b6b #482 Quickfix to parse mount output 2023-11-14 22:47:56 +01:00
Kelly Brazil
fd283f6cf7 doc update 2023-11-04 16:02:02 -07:00
Kelly Brazil
b881ad4ec0 make blank target null 2023-11-04 15:58:12 -07:00
Kelly Brazil
2fcb32e26f add debian/ubuntu package index support 2023-11-04 15:19:12 -07:00
Kelly Brazil
13a802225b add tests 2023-10-25 16:53:24 -07:00
Kelly Brazil
88649a4e8d doc update 2023-10-25 13:42:19 -07:00
Kelly Brazil
5c6fa5bff6 better header row detection 2023-10-25 13:37:59 -07:00
Kelly Brazil
b70025d6d6 fix for blank target in rule 2023-10-24 16:17:53 -07:00
Kelly Brazil
59b89ecbd4 version bump 2023-10-24 15:24:33 -07:00
Kelly Brazil
8f7502ff0f Merge pull request #479 from kellyjonbrazil/master
sync to dev
2023-10-24 22:22:31 +00:00
Kelly Brazil
249d93f15c Merge pull request #477 from kellyjonbrazil/dev
v1.23.6
2023-10-24 00:57:57 +00:00
Kelly Brazil
b0cf2e2d78 clean up final return 2023-10-23 17:37:53 -07:00
Kelly Brazil
264fcd40ad clear linedata if 'from' found 2023-10-23 17:36:25 -07:00
Kelly Brazil
54def8ef49 doc update 2023-10-23 15:41:11 -07:00
Kelly Brazil
63c271b837 add tests 2023-10-23 15:39:13 -07:00
Kelly Brazil
741b2d1c1d version bump 2023-10-23 15:15:07 -07:00
Kelly Brazil
47d4335890 fix for multi-word remote 2023-10-23 15:14:28 -07:00
Kelly Brazil
81f721f1ab doc update 2023-10-23 14:33:40 -07:00
Kelly Brazil
c4e1068895 move print statements 2023-10-23 14:06:04 -07:00
Kelly Brazil
a77bb4165a fix tests for different xmltodict versions 2023-10-23 12:49:06 -07:00
Kelly Brazil
3cd2dce496 formatting 2023-10-23 08:01:50 -07:00
Kelly Brazil
46a8978740 doc update 2023-10-23 07:54:14 -07:00
Kelly Brazil
3161c48939 fix for older xmltodict library versions 2023-10-23 07:53:39 -07:00
Kelly Brazil
a89a9187f8 version bump 2023-10-23 07:53:16 -07:00
Kelly Brazil
d9e0aa5b93 Merge pull request #475 from kellyjonbrazil/master
Merge pull request #473 from kellyjonbrazil/dev
2023-10-23 14:48:55 +00:00
Kelly Brazil
d298e101e9 Merge pull request #473 from kellyjonbrazil/dev
Dev v1.23.5
2023-10-21 12:23:25 -07:00
Kelly Brazil
cea975d7f1 doc update 2023-10-21 12:12:43 -07:00
Kelly Brazil
1ed69f9e6a doc update and fix tests 2023-10-21 12:09:18 -07:00
Kelly Brazil
ab0e05ec82 only set colors if pygments is installed 2023-10-21 12:01:09 -07:00
Kelly Brazil
c16cce4bf0 add tests 2023-10-13 08:52:14 -07:00
Kelly Brazil
d3489536a1 add "7" as a netstat raw state 2023-10-12 17:25:40 -07:00
Sebastian Uhlik
041050ce28 Fix bug in split when program running on UDP contains space in name (#447)
* Add condition before split.

* Safe detection of 'state' presence.

---------

Co-authored-by: Kelly Brazil <kellyjonbrazil@gmail.com>
2023-10-12 17:21:06 -07:00
pettai
7de1a8a5d6 add more tests (#468)
add all test-cases
2023-10-05 17:09:35 -07:00
Kelly Brazil
d4604743d1 add multiline value support to env parser 2023-10-02 16:30:56 -07:00
Kelly Brazil
0b8fb31298 doc update 2023-10-02 08:35:36 -07:00
Kelly Brazil
dcdd79e28c doc update 2023-10-02 08:34:14 -07:00
Kelly Brazil
5291baeb8e fixup variable names 2023-10-02 08:32:41 -07:00
Kelly Brazil
6867102c66 doc update 2023-10-01 18:13:27 -07:00
Kelly Brazil
36ed2c7e2e add lsb_release parser 2023-10-01 18:12:22 -07:00
Kelly Brazil
4ab0aba9d3 doc update 2023-10-01 17:42:42 -07:00
Kelly Brazil
e643badaf7 add os-release parser 2023-10-01 17:42:00 -07:00
Kelly Brazil
d96e96219e add comment support to xml parser 2023-10-01 11:49:50 -07:00
Kelly Brazil
e42af3353e fix pidstat parsers for -T ALL option 2023-10-01 11:25:56 -07:00
Kelly Brazil
4ec2b16f42 doc fix 2023-09-30 16:18:32 -07:00
Kelly Brazil
0a028456bf add nsd-control parser 2023-09-30 15:45:29 -07:00
pettai
a1f10928e1 Add nsd-control (#454)
* Create nsd_control.py

Init nsd_control.py

* cleanup nsd-control + add test data

- Cleanup nsd-control parser
- Add test data

* add test script

add test script + fix test data

* Update test_nsd_control.py

fix a default test

* Update test_nsd_control.py

nit
2023-09-30 15:36:52 -07:00
Kelly Brazil
eae1d4b89a doc update 2023-09-30 15:34:29 -07:00
Kelly Brazil
d3c7cec333 add host parser 2023-09-30 15:32:27 -07:00
pettai
36fa08d711 Add ISC 'host' support (#450)
* Add ISC 'host' support

Add ISC 'host' command support

* Update host.py

remove leading tab from string

* Add integer conversion

Per request, fix integer conversion

* Cleanup

Cleanup strip()'s

* Add tests

Add two tests for the 'host' parser

* Update test_host.py

nit

---------

Co-authored-by: Kelly Brazil <kellyjonbrazil@gmail.com>
2023-09-30 15:26:03 -07:00
Kelly Brazil
a9958841e4 doc update 2023-09-30 15:23:29 -07:00
Kelly Brazil
504ad81a01 version bump 2023-09-30 15:21:29 -07:00
Kevin Lyter
8bf2f4f4d0 [xrandr] Fix 453 devices issue (#455)
* [xrandr] Fix bug 453, clean up data model

* Fix: 'devices' was originally not a list, just assigned each time it
was parsed. Made that a list and appended to it.
* Removed distinction between unassociated/associated devices
* Added test for @marcin-koziol's problem
* Put tests into separate test methods

* Formatting cleanup

* Backwards compatible type syntax

---------

Co-authored-by: Kelly Brazil <kellyjonbrazil@gmail.com>
2023-09-30 15:19:14 -07:00
Kelly Brazil
805397ea18 doc update 2023-09-30 15:15:29 -07:00
Samson Umezulike
1b3985c2d7 Adds graceful handling of superfluous bits in bit strings (#459)
Co-authored-by: Kelly Brazil <kellyjonbrazil@gmail.com>
2023-09-30 15:12:18 -07:00
Kelly Brazil
f602043642 fix for negative serial numbers 2023-09-30 15:06:28 -07:00
Samson Umezulike
1a1aa8fda3 Adds graceful handling of negative serial numbers in x509 certificates (#445)
Co-authored-by: Kelly Brazil <kellyjonbrazil@gmail.com>
2023-09-30 15:02:55 -07:00
Kelly Brazil
3249a017ae add dest-unreachable test 2023-09-15 12:09:38 -07:00
Kelly Brazil
84f0246b2d move int/float conversions to _process 2023-09-14 18:15:28 -07:00
Kelly Brazil
1c795982b0 add error and corrupted fields to ping-s 2023-09-14 13:03:45 -07:00
Kelly Brazil
c5164b4108 doc update 2023-09-10 15:20:11 -07:00
Kelly Brazil
dc3716ecb3 add errors and corrupted support 2023-09-10 15:18:32 -07:00
Kelly Brazil
c5165ccc21 version bump 2023-09-10 15:18:23 -07:00
José Miguel Guzmán
5b2035e0e6 Add support for corrupted and errors in linux ping. (#442)
* Add support for corrupted and errors in linux ping.

* Fix regular expressions

* Workaround to keep compatibility with current tests

---------

Co-authored-by: Jose Miguel Guzman <jmguzman@whitestack.com>
Co-authored-by: Kelly Brazil <kellyjonbrazil@gmail.com>
2023-08-25 10:31:17 -07:00
Kelly Brazil
5205154aaf add percent_wait to float conversion 2023-08-21 17:18:07 -07:00
Kelly Brazil
f500de3af6 version bump 2023-08-21 17:17:34 -07:00
Kian-Meng Ang
4b028b5080 Fix typos (#440)
Found via `codespell -S ./tests/fixtures -L chage,respons,astroid,unx,ist,technik,ans,buildd`
2023-07-31 08:45:03 -07:00
Kelly Brazil
4cd721be85 Dev v1.23.4 (#439)
* version bump

* fix regex for crlf line endings

* Completed Ip_route parser (#429)

* tests

* Merge pull request #398 from kellyjonbrazil/dev

Dev v1.23.2

* Merge pull request #398 from kellyjonbrazil/dev

Dev v1.23.2

---------

Co-authored-by: Kelly Brazil <kellyjonbrazil@gmail.com>
Co-authored-by: Jjack3032 <julian.jackson@parsons.us>

* formatting

* doc update

* use splitlines

* formatting

* formatting

* Parser for `find` linux command (#434)

* Added find parser and tests for Centos 7.7 and Ubuntu 18.04

* Added a test file, changed logic, and included a case for permission denied returned by find.

* Added a few more lines to the tests

* Changed logic for setting values to null and updated test cases.

* doc update

* doc update

* Added proc_net_tcp parser (#421)

Co-authored-by: Kelly Brazil <kellyjonbrazil@gmail.com>

* clean up net_tcp parser

* add resolve.conf test files

* doc update

* add resolve.conf parser

* doc update

* add sortlist functionality

* add resolve.conf parser tests

* doc update

---------

Co-authored-by: Julian5555 <58196809+Julian5555@users.noreply.github.com>
Co-authored-by: Jjack3032 <julian.jackson@parsons.us>
Co-authored-by: solomonleang <124934439+solomonleang@users.noreply.github.com>
Co-authored-by: AlvinSolomon <41175627+AlvinSolomon@users.noreply.github.com>
2023-07-30 10:08:39 -07:00
Kelly Brazil
d58ca402a7 Merge pull request #432 from kellyjonbrazil/revert-430-find_parser
Revert "Added find parser and tests for Centos 7.7 and Ubuntu 18.04"
2023-06-23 15:40:06 +00:00
Kelly Brazil
5386879040 Revert "Added find parser and tests for Centos 7.7 and Ubuntu 18.04 (#430)"
This reverts commit f19a1f23a9.
2023-06-23 08:39:12 -07:00
solomonleang
f19a1f23a9 Added find parser and tests for Centos 7.7 and Ubuntu 18.04 (#430)
* Added find parser and tests for Centos 7.7 and Ubuntu 18.04

* Added a test file, changed logic, and included a case for permission denied returned by find.

* Added a few more lines to the tests

* Changed logic for setting values to null and updated test cases.
2023-06-23 08:38:56 -07:00
Kelly Brazil
5023e5be4c Dev v1.23.3 (#426)
* make certificate search more robust to different line endings

* use license_files instead of license_file which is deprecated

* version bump

* parsing extra options -e, -o, -p

* fix for extra opts and different field length at option -[aeop]

* test integration for extra opts -e -o -p

* formatting and use ast.literal_eval instead of eval

* doc update

* doc update

* Add a parser to parse mounted encrypted veracrypt volumes (fixes #403)

* update compatibility warning message

* netstat windows parser

* tests

* Windows route parser

* tests

* id should be a string

* add veracrypt parser and docs

* formatting

* doc update

* lsattr parser

* Update test_lsattr.py

* changed keys to lowercase

* changed info

* support missing data for stat

* doc update

* doc update

* doc update

* ensure compatibility warning prints even with no data

* improve compatibility message

* add support for dig +nsid option

* New parser: srt (#415)

* srt parser

* changed the parser to support more complex cases

* doc updates

* Adding certificate request parser (#416)

* Adding certificate request parser

* Adding the CSR type for Windows-style CSR

---------

Co-authored-by: Stg22 <stephane.for.test@gmail.com>

* doc update

* add csr tests

* Last -x (#422)

* Refactored the parser

* last -x support

* doc update

* fix for ping on linux with missing hostname

* allow less strict email decoding with a warning.

* doc update

* use explicit ascii decode with backslashreplace

* doc update

* use jc warning function instead of print for warning message

* last -x shutdown fix (#423)

* inject quiet setting into asn1crypto library

* Parse appearance and modalias lines for mouse devices (fixes #419) (#425)

The bluetoothctl device parser is implemented so that it aborts the parsing
process immediately, returning what it has collected so far. This is because
the parser should work in a hybrid way to support outputs coming from
bluetoothctl devices and bluetoothctl info calls.

* doc update

* doc update

---------

Co-authored-by: gerd <gerd.augstein@gmail.com>
Co-authored-by: Jake Ob <iakopap@gmail.com>
Co-authored-by: Mevaser <mevaser.rotner@gmail.com>
Co-authored-by: M.R <69431152+YeahItsMeAgain@users.noreply.github.com>
Co-authored-by: Stg22 <46686290+Stg22@users.noreply.github.com>
Co-authored-by: Stg22 <stephane.for.test@gmail.com>
2023-06-21 15:48:23 -07:00
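Commit `347097a294` above adds a `convert_size_to_int` function to jc's shared library. As a rough illustration (a hypothetical sketch, not jc's actual implementation), such a helper typically maps human-readable sizes like `4K` or `1.5M` to integer bytes:

```python
import re
from typing import Optional

# Hypothetical sketch only: jc's real convert_size_to_int may accept
# different suffixes and handle rounding differently.
_MULTIPLIERS = {'': 1, 'K': 1024, 'M': 1024**2, 'G': 1024**3, 'T': 1024**4}

def convert_size_to_int(size: str) -> Optional[int]:
    """Convert strings like '512', '4K', or '1.5M' to integer bytes."""
    match = re.fullmatch(r'(\d+(?:\.\d+)?)\s*([KMGT]?)i?B?', size.strip())
    if not match:
        return None  # unparseable input
    number, suffix = match.groups()
    return int(float(number) * _MULTIPLIERS[suffix])

print(convert_size_to_int('4K'))    # 4096
print(convert_size_to_int('1.5M'))  # 1572864
```

A parser that reports sizes as strings could call a helper like this during its post-processing step.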
345 changed files with 44913 additions and 1875 deletions


@@ -14,12 +14,12 @@ jobs:
     strategy:
       matrix:
         os: [macos-latest, ubuntu-20.04, windows-latest]
-        python-version: ["3.6", "3.7", "3.8", "3.9", "3.10", "3.11"]
+        python-version: ["3.6", "3.7", "3.8", "3.9", "3.10", "3.11", "3.12"]
     steps:
       - uses: actions/checkout@v3
       - name: "Set up timezone to America/Los_Angeles"
-        uses: szenius/set-timezone@v1.0
+        uses: szenius/set-timezone@v1.2
         with:
           timezoneLinux: "America/Los_Angeles"
           timezoneMacos: "America/Los_Angeles"

.gitignore

@@ -6,3 +6,4 @@ build/
 .github/
 .vscode/
 _config.yml
+.venv


@@ -1,5 +1,65 @@
jc changelog
20231216 v1.24.0
- Add `debconf-show` command parser
- Add `iftop` command parser
- Add `pkg-index-apk` parser for Alpine Linux Package Index files
- Add `pkg-index-deb` parser for Debian/Ubuntu Package Index files
- Add `proc-cmdline` parser for `/proc/cmdline` file
- Add `swapon` command parser
- Add `tune2fs` command parser
- Remove `iso-datetime` parser, deprecated since v1.22.1 (use `datetime-iso` instead)
- Update timezone change in Github Actions for node v16 requirement
- Add Python 3.12 tests to Github Actions
- Refactor `acpi` command parser for code cleanup
- Refactor vendored libraries to remove Python 2 support
- Fix `iptables` parser for cases where the `target` field is blank in a rule
- Fix `vmstat` parsers for some cases where wide output is used
- Fix `mount` parser for cases with spaces in the mount point name
- Fix `xrandr` parser for infinite loop issues
20231023 v1.23.6
- Fix XML parser for xmltodict library versions < 0.13.0
- Fix `who` command parser for cases when the from field contains spaces
20231021 v1.23.5
- Add `host` command parser
- Add `nsd-control` command parser
- Add `lsb_release` command parser
- Add `/etc/os-release` file parser
- Enhance `env` command parser to support multi-line values
- Enhance `ping` and `ping-s` parsers to add error and corrupted support
- Enhance `xml` parser to include comments in the JSON output
- Fix `pidstat` command parser when using `-T ALL`
- Fix `x509-cert` parser to allow negative serial numbers
- Fix `x509-cert` parser for cases when bitstrings are larger than standard
- Fix `xrandr` command parser for associated device issues
- Fix error when pygments library is not installed
20230730 v1.23.4
- Add `/etc/resolve.conf` file parser
- Add `/proc/net/tcp` and `/proc/net/tcp6` file parser
- Add `find` command parser
- Add `ip route` command parser
- Fix `certbot` command parser to be more robust with different line endings
20230621 v1.23.3
- Add `lsattr` command parser
- Add `srt` file parser
- Add `veracrypt` command parser
- Add X509 Certificate Request file parser
- Enhance X509 Certificate parser to allow non-compliant email addresses with a warning
- Enhance `dig` command parser to support the `+nsid` option
- Enhance `last` and `lastb` command parser to support the `-x` option
- Enhance `route` command parser to add Windows support
- Enhance `netstat` command parser to add Windows support
- Enhance `ss` command parser to support extended options
- Enhance the compatibility warning message
- Fix `bluetoothctl` command parser for some mouse devices
- Fix `ping` command parsers for output with missing hostname
- Fix `stat` command parser for older versions that may not contain all fields
- Fix deprecated option in `setup.cfg`
20230429 v1.23.2
- Add `bluetoothctl` command parser
- Add `certbot` command parser for `certificates` and `show_account` options
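The `proc-cmdline` parser added in v1.24.0 above reads the kernel boot parameters in `/proc/cmdline`. A minimal sketch of that kind of parse, assuming a simple split into key=value pairs and bare flags (illustrative only; jc's actual schema and implementation may differ):

```python
def parse_cmdline(data: str) -> dict:
    """Split a /proc/cmdline string into key=value parameters and bare flags."""
    result = {'_options': []}
    for token in data.strip().split():
        if '=' in token:
            key, _, value = token.partition('=')
            result[key] = value
        else:
            result['_options'].append(token)  # flags with no value, e.g. 'ro'
    return result

sample = 'BOOT_IMAGE=/boot/vmlinuz-5.15.0 root=/dev/sda1 ro quiet splash'
print(parse_cmdline(sample))
```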


@@ -4551,6 +4551,57 @@ cat entrust.pem | jc --x509-cert -p
}
]
```
### X.509 PEM and DER certificate request files
```bash
cat myserver.csr | jc --x509-csr -p
```
```json
[
  {
    "certification_request_info": {
      "version": "v1",
      "subject": {
        "common_name": "myserver.for.example"
      },
      "subject_pk_info": {
        "algorithm": {
          "algorithm": "ec",
          "parameters": "secp256r1"
        },
        "public_key": "04:40:33:c0:91:8f:e9:46:ea:d0:dc:d0:f9:63:2c:a4:35:1f:0f:54:c8:a9:9b:e3:9e:d4:f3:64:b8:60:cc:7f:39:75:dd:a7:61:31:02:7c:9e:89:c6:db:45:15:f2:5f:b0:65:29:0b:42:d2:6e:c2:ea:a6:23:bd:fc:65:e5:7d:4e"
      },
      "attributes": [
        {
          "type": "extension_request",
          "values": [
            [
              {
                "extn_id": "extended_key_usage",
                "critical": false,
                "extn_value": [
                  "server_auth"
                ]
              },
              {
                "extn_id": "subject_alt_name",
                "critical": false,
                "extn_value": [
                  "myserver.for.example"
                ]
              }
            ]
          ]
        }
      ]
    },
    "signature_algorithm": {
      "algorithm": "sha384_ecdsa",
      "parameters": null
    },
    "signature": "30:45:02:20:77:ac:5b:51:bf:c5:f5:43:02:52:ae:66:8a:fe:95:98:98:98:a9:45:34:31:08:ff:2c:cc:92:d9:1c:70:28:74:02:21:00:97:79:7b:e7:45:18:76:cf:d7:3b:79:34:56:d2:69:b5:73:41:9b:8a:b7:ad:ec:80:23:c1:2f:64:da:e5:28:19"
  }
]
```
### XML files
```bash
cat cd_catalog.xml


@@ -5,11 +5,13 @@
> Try the `jc` [web demo](https://jc-web.onrender.com/) and [REST API](https://github.com/kellyjonbrazil/jc-restapi)
-> JC is [now available](https://galaxy.ansible.com/community/general) as an
+> `jc` is [now available](https://galaxy.ansible.com/community/general) as an
Ansible filter plugin in the `community.general` collection. See this
[blog post](https://blog.kellybrazil.com/2020/08/30/parsing-command-output-in-ansible-with-jc/)
for an example.
> Looking for something like `jc` but lower-level? Check out [regex2json](https://gitlab.com/tozd/regex2json).
# JC
JSON Convert
@@ -118,6 +120,7 @@ pip3 install jc
| NixOS linux | `nix-env -iA nixpkgs.jc` or `nix-env -iA nixos.jc` |
| Guix System linux | `guix install jc` |
| Gentoo Linux | `emerge dev-python/jc` |
| Photon linux | `tdnf install jc` |
| macOS | `brew install jc` |
| FreeBSD | `portsnap fetch update && cd /usr/ports/textproc/py-jc && make install clean` |
| Ansible filter plugin | `ansible-galaxy collection install community.general` |
@@ -176,6 +179,7 @@ option.
| `--csv-s` | CSV file streaming parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/csv_s) |
| `--date` | `date` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/date) |
| `--datetime-iso` | ISO 8601 Datetime string parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/datetime_iso) |
| `--debconf-show` | `debconf-show` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/debconf_show) |
| `--df` | `df` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/df) |
| `--dig` | `dig` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/dig) |
| `--dir` | `dir` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/dir) |
@@ -185,6 +189,7 @@ option.
| `--email-address` | Email Address string parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/email_address) |
| `--env` | `env` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/env) |
| `--file` | `file` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/file) |
| `--find` | `find` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/find) |
| `--findmnt` | `findmnt` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/findmnt) |
| `--finger` | `finger` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/finger) |
| `--free` | `free` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/free) |
@@ -199,6 +204,7 @@ option.
| `--hashsum` | hashsum command parser (`md5sum`, `shasum`, etc.) | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/hashsum) |
| `--hciconfig` | `hciconfig` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/hciconfig) |
| `--history` | `history` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/history) |
| `--host` | `host` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/host) |
| `--hosts` | `/etc/hosts` file parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/hosts) |
| `--id` | `id` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/id) |
| `--ifconfig` | `ifconfig` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/ifconfig) |
@@ -208,6 +214,7 @@ option.
| `--iostat-s` | `iostat` command streaming parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/iostat_s) |
| `--ip-address` | IPv4 and IPv6 Address string parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/ip_address) |
| `--iptables` | `iptables` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/iptables) |
| `--ip-route` | `ip route` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/ip_route) |
| `--iw-scan` | `iw dev [device] scan` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/iw_scan) |
| `--iwconfig` | `iwconfig` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/iwconfig) |
| `--jar-manifest` | Java MANIFEST.MF file parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/jar_manifest) |
@@ -217,6 +224,8 @@ option.
| `--last` | `last` and `lastb` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/last) |
| `--ls` | `ls` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/ls) |
| `--ls-s` | `ls` command streaming parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/ls_s) |
| `--lsattr` | `lsattr` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/lsattr) |
| `--lsb-release` | `lsb_release` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/lsb_release) |
| `--lsblk` | `lsblk` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/lsblk) |
| `--lsmod` | `lsmod` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/lsmod) |
| `--lsof` | `lsof` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/lsof) |
@@ -229,9 +238,11 @@ option.
| `--mpstat-s` | `mpstat` command streaming parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/mpstat_s) |
| `--netstat` | `netstat` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/netstat) |
| `--nmcli` | `nmcli` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/nmcli) |
| `--nsd-control` | `nsd-control` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/nsd_control) |
| `--ntpq` | `ntpq -p` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/ntpq) |
| `--openvpn` | openvpn-status.log file parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/openvpn) |
| `--os-prober` | `os-prober` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/os_prober) |
| `--os-release` | `/etc/os-release` file parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/os_release) |
| `--passwd` | `/etc/passwd` file parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/passwd) |
| `--pci-ids` | `pci.ids` file parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/pci_ids) |
| `--pgpass` | PostgreSQL password file parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/pgpass) |
@@ -241,10 +252,13 @@ option.
| `--ping-s` | `ping` and `ping6` command streaming parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/ping_s) |
| `--pip-list` | `pip list` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/pip_list) |
| `--pip-show` | `pip show` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/pip_show) |
| `--pkg-index-apk` | Alpine Linux Package Index file parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/pkg_index_apk) |
| `--pkg-index-deb` | Debian Package Index file parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/pkg_index_deb) |
| `--plist` | PLIST file parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/plist) |
| `--postconf` | `postconf -M` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/postconf) |
| `--proc` | `/proc/` file parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/proc) |
| `--ps` | `ps` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/ps) |
| `--resolve-conf` | `/etc/resolve.conf` file parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/resolve_conf) |
| `--route` | `route` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/route) |
| `--rpm-qi` | `rpm -qi` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/rpm_qi) |
| `--rsync` | `rsync` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/rsync) |
@@ -252,11 +266,13 @@ option.
| `--semver` | Semantic Version string parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/semver) |
| `--sfdisk` | `sfdisk` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/sfdisk) |
| `--shadow` | `/etc/shadow` file parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/shadow) |
| `--srt` | SRT file parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/srt) |
| `--ss` | `ss` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/ss) |
| `--ssh-conf` | `ssh` config file and `ssh -G` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/ssh_conf) |
| `--sshd-conf` | `sshd` config file and `sshd -T` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/sshd_conf) |
| `--stat` | `stat` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/stat) |
| `--stat-s` | `stat` command streaming parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/stat_s) |
| `--swapon` | `swapon` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/swapon) |
| `--sysctl` | `sysctl` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/sysctl) |
| `--syslog` | Syslog RFC 5424 string parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/syslog) |
| `--syslog-s` | Syslog RFC 5424 string streaming parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/syslog_s) |
@@ -275,6 +291,7 @@ option.
| `--top-s` | `top -b` command streaming parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/top_s) |
| `--tracepath` | `tracepath` and `tracepath6` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/tracepath) |
| `--traceroute` | `traceroute` and `traceroute6` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/traceroute) |
| `--tune2fs` | `tune2fs -l` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/tune2fs) |
| `--udevadm` | `udevadm info` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/udevadm) |
| `--ufw` | `ufw status` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/ufw) |
| `--ufw-appinfo` | `ufw app info [application]` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/ufw_appinfo) |
@@ -285,12 +302,14 @@ option.
| `--uptime` | `uptime` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/uptime) |
| `--url` | URL string parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/url) |
| `--ver` | Version string parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/ver) |
| `--veracrypt` | `veracrypt` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/veracrypt) |
| `--vmstat` | `vmstat` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/vmstat) |
| `--vmstat-s` | `vmstat` command streaming parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/vmstat_s) |
| `--w` | `w` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/w) |
| `--wc` | `wc` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/wc) |
| `--who` | `who` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/who) |
| `--x509-cert` | X.509 PEM and DER certificate file parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/x509_cert) |
| `--x509-csr` | X.509 PEM and DER certificate request file parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/x509_csr) |
| `--xml` | XML file parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/xml) |
| `--xrandr` | `xrandr` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/xrandr) |
| `--yaml` | YAML file parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/yaml) |


@@ -3,8 +3,8 @@ _jc()
local cur prev words cword jc_commands jc_parsers jc_options \
jc_about_options jc_about_mod_options jc_help_options jc_special_options
jc_commands=(acpi airport arp blkid bluetoothctl cbt certbot chage cksum crontab date df dig dmidecode dpkg du env file findmnt finger free git gpg hciconfig id ifconfig iostat iptables iw iwconfig jobs last lastb ls lsblk lsmod lsof lspci lsusb md5 md5sum mdadm mount mpstat netstat nmcli ntpq os-prober pidstat ping ping6 pip pip3 postconf printenv ps route rpm rsync sfdisk sha1sum sha224sum sha256sum sha384sum sha512sum shasum ss ssh sshd stat sum sysctl systemctl systeminfo timedatectl top tracepath tracepath6 traceroute traceroute6 udevadm ufw uname update-alternatives upower uptime vdir vmstat w wc who xrandr zipinfo zpool)
jc_parsers=(--acpi --airport --airport-s --arp --asciitable --asciitable-m --blkid --bluetoothctl --cbt --cef --cef-s --certbot --chage --cksum --clf --clf-s --crontab --crontab-u --csv --csv-s --date --datetime-iso --df --dig --dir --dmidecode --dpkg-l --du --email-address --env --file --findmnt --finger --free --fstab --git-log --git-log-s --git-ls-remote --gpg --group --gshadow --hash --hashsum --hciconfig --history --hosts --id --ifconfig --ini --ini-dup --iostat --iostat-s --ip-address --iptables --iw-scan --iwconfig --jar-manifest --jobs --jwt --kv --last --ls --ls-s --lsblk --lsmod --lsof --lspci --lsusb --m3u --mdadm --mount --mpstat --mpstat-s --netstat --nmcli --ntpq --openvpn --os-prober --passwd --pci-ids --pgpass --pidstat --pidstat-s --ping --ping-s --pip-list --pip-show --plist --postconf --proc --proc-buddyinfo --proc-consoles --proc-cpuinfo --proc-crypto --proc-devices --proc-diskstats --proc-filesystems --proc-interrupts --proc-iomem --proc-ioports --proc-loadavg --proc-locks --proc-meminfo --proc-modules --proc-mtrr --proc-pagetypeinfo --proc-partitions --proc-slabinfo --proc-softirqs --proc-stat --proc-swaps --proc-uptime --proc-version --proc-vmallocinfo --proc-vmstat --proc-zoneinfo --proc-driver-rtc --proc-net-arp --proc-net-dev --proc-net-dev-mcast --proc-net-if-inet6 --proc-net-igmp --proc-net-igmp6 --proc-net-ipv6-route --proc-net-netlink --proc-net-netstat --proc-net-packet --proc-net-protocols --proc-net-route --proc-net-unix --proc-pid-fdinfo --proc-pid-io --proc-pid-maps --proc-pid-mountinfo --proc-pid-numa-maps --proc-pid-smaps --proc-pid-stat --proc-pid-statm --proc-pid-status --ps --route --rpm-qi --rsync --rsync-s --semver --sfdisk --shadow --ss --ssh-conf --sshd-conf --stat --stat-s --sysctl --syslog --syslog-s --syslog-bsd --syslog-bsd-s --systemctl --systemctl-lj --systemctl-ls --systemctl-luf --systeminfo --time --timedatectl --timestamp --toml --top --top-s --tracepath --traceroute --udevadm --ufw --ufw-appinfo --uname --update-alt-gs --update-alt-q --upower --uptime --url --ver --vmstat --vmstat-s --w --wc --who --x509-cert --xml --xrandr --yaml --zipinfo --zpool-iostat --zpool-status)
jc_commands=(acpi airport arp blkid bluetoothctl cbt certbot chage cksum crontab date debconf-show df dig dmidecode dpkg du env file findmnt finger free git gpg hciconfig host id ifconfig iostat ip iptables iw iwconfig jobs last lastb ls lsattr lsb_release lsblk lsmod lsof lspci lsusb md5 md5sum mdadm mount mpstat netstat nmcli nsd-control ntpq os-prober pidstat ping ping6 pip pip3 postconf printenv ps route rpm rsync sfdisk sha1sum sha224sum sha256sum sha384sum sha512sum shasum ss ssh sshd stat sum swapon sysctl systemctl systeminfo timedatectl top tracepath tracepath6 traceroute traceroute6 tune2fs udevadm ufw uname update-alternatives upower uptime vdir veracrypt vmstat w wc who xrandr zipinfo zpool)
jc_parsers=(--acpi --airport --airport-s --arp --asciitable --asciitable-m --blkid --bluetoothctl --cbt --cef --cef-s --certbot --chage --cksum --clf --clf-s --crontab --crontab-u --csv --csv-s --date --datetime-iso --debconf-show --df --dig --dir --dmidecode --dpkg-l --du --email-address --env --file --find --findmnt --finger --free --fstab --git-log --git-log-s --git-ls-remote --gpg --group --gshadow --hash --hashsum --hciconfig --history --host --hosts --id --ifconfig --ini --ini-dup --iostat --iostat-s --ip-address --iptables --ip-route --iw-scan --iwconfig --jar-manifest --jobs --jwt --kv --last --ls --ls-s --lsattr --lsb-release --lsblk --lsmod --lsof --lspci --lsusb --m3u --mdadm --mount --mpstat --mpstat-s --netstat --nmcli --nsd-control --ntpq --openvpn --os-prober --os-release --passwd --pci-ids --pgpass --pidstat --pidstat-s --ping --ping-s --pip-list --pip-show --pkg-index-apk --pkg-index-deb --plist --postconf --proc --proc-buddyinfo --proc-cmdline --proc-consoles --proc-cpuinfo --proc-crypto --proc-devices --proc-diskstats --proc-filesystems --proc-interrupts --proc-iomem --proc-ioports --proc-loadavg --proc-locks --proc-meminfo --proc-modules --proc-mtrr --proc-pagetypeinfo --proc-partitions --proc-slabinfo --proc-softirqs --proc-stat --proc-swaps --proc-uptime --proc-version --proc-vmallocinfo --proc-vmstat --proc-zoneinfo --proc-driver-rtc --proc-net-arp --proc-net-dev --proc-net-dev-mcast --proc-net-if-inet6 --proc-net-igmp --proc-net-igmp6 --proc-net-ipv6-route --proc-net-netlink --proc-net-netstat --proc-net-packet --proc-net-protocols --proc-net-route --proc-net-tcp --proc-net-unix --proc-pid-fdinfo --proc-pid-io --proc-pid-maps --proc-pid-mountinfo --proc-pid-numa-maps --proc-pid-smaps --proc-pid-stat --proc-pid-statm --proc-pid-status --ps --resolve-conf --route --rpm-qi --rsync --rsync-s --semver --sfdisk --shadow --srt --ss --ssh-conf --sshd-conf --stat --stat-s --swapon --sysctl --syslog --syslog-s --syslog-bsd --syslog-bsd-s --systemctl --systemctl-lj --systemctl-ls --systemctl-luf --systeminfo --time --timedatectl --timestamp --toml --top --top-s --tracepath --traceroute --tune2fs --udevadm --ufw --ufw-appinfo --uname --update-alt-gs --update-alt-q --upower --uptime --url --ver --veracrypt --vmstat --vmstat-s --w --wc --who --x509-cert --x509-csr --xml --xrandr --yaml --zipinfo --zpool-iostat --zpool-status)
jc_options=(--force-color -C --debug -d --monochrome -m --meta-out -M --pretty -p --quiet -q --raw -r --unbuffer -u --yaml-out -y)
jc_about_options=(--about -a)
jc_about_mod_options=(--pretty -p --yaml-out -y --monochrome -m --force-color -C)


@@ -9,7 +9,7 @@ _jc() {
jc_help_options jc_help_options_describe \
jc_special_options jc_special_options_describe
jc_commands=(acpi airport arp blkid bluetoothctl cbt certbot chage cksum crontab date df dig dmidecode dpkg du env file findmnt finger free git gpg hciconfig id ifconfig iostat iptables iw iwconfig jobs last lastb ls lsblk lsmod lsof lspci lsusb md5 md5sum mdadm mount mpstat netstat nmcli ntpq os-prober pidstat ping ping6 pip pip3 postconf printenv ps route rpm rsync sfdisk sha1sum sha224sum sha256sum sha384sum sha512sum shasum ss ssh sshd stat sum sysctl systemctl systeminfo timedatectl top tracepath tracepath6 traceroute traceroute6 udevadm ufw uname update-alternatives upower uptime vdir vmstat w wc who xrandr zipinfo zpool)
jc_commands=(acpi airport arp blkid bluetoothctl cbt certbot chage cksum crontab date debconf-show df dig dmidecode dpkg du env file findmnt finger free git gpg hciconfig host id ifconfig iostat ip iptables iw iwconfig jobs last lastb ls lsattr lsb_release lsblk lsmod lsof lspci lsusb md5 md5sum mdadm mount mpstat netstat nmcli nsd-control ntpq os-prober pidstat ping ping6 pip pip3 postconf printenv ps route rpm rsync sfdisk sha1sum sha224sum sha256sum sha384sum sha512sum shasum ss ssh sshd stat sum swapon sysctl systemctl systeminfo timedatectl top tracepath tracepath6 traceroute traceroute6 tune2fs udevadm ufw uname update-alternatives upower uptime vdir veracrypt vmstat w wc who xrandr zipinfo zpool)
jc_commands_describe=(
'acpi:run "acpi" command with magic syntax.'
'airport:run "airport" command with magic syntax.'
@@ -22,6 +22,7 @@ _jc() {
'cksum:run "cksum" command with magic syntax.'
'crontab:run "crontab" command with magic syntax.'
'date:run "date" command with magic syntax.'
'debconf-show:run "debconf-show" command with magic syntax.'
'df:run "df" command with magic syntax.'
'dig:run "dig" command with magic syntax.'
'dmidecode:run "dmidecode" command with magic syntax.'
@@ -35,9 +36,11 @@ _jc() {
'git:run "git" command with magic syntax.'
'gpg:run "gpg" command with magic syntax.'
'hciconfig:run "hciconfig" command with magic syntax.'
'host:run "host" command with magic syntax.'
'id:run "id" command with magic syntax.'
'ifconfig:run "ifconfig" command with magic syntax.'
'iostat:run "iostat" command with magic syntax.'
'ip:run "ip" command with magic syntax.'
'iptables:run "iptables" command with magic syntax.'
'iw:run "iw" command with magic syntax.'
'iwconfig:run "iwconfig" command with magic syntax.'
@@ -45,6 +48,8 @@ _jc() {
'last:run "last" command with magic syntax.'
'lastb:run "lastb" command with magic syntax.'
'ls:run "ls" command with magic syntax.'
'lsattr:run "lsattr" command with magic syntax.'
'lsb_release:run "lsb_release" command with magic syntax.'
'lsblk:run "lsblk" command with magic syntax.'
'lsmod:run "lsmod" command with magic syntax.'
'lsof:run "lsof" command with magic syntax.'
@@ -57,6 +62,7 @@ _jc() {
'mpstat:run "mpstat" command with magic syntax.'
'netstat:run "netstat" command with magic syntax.'
'nmcli:run "nmcli" command with magic syntax.'
'nsd-control:run "nsd-control" command with magic syntax.'
'ntpq:run "ntpq" command with magic syntax.'
'os-prober:run "os-prober" command with magic syntax.'
'pidstat:run "pidstat" command with magic syntax.'
@@ -82,6 +88,7 @@ _jc() {
'sshd:run "sshd" command with magic syntax.'
'stat:run "stat" command with magic syntax.'
'sum:run "sum" command with magic syntax.'
'swapon:run "swapon" command with magic syntax.'
'sysctl:run "sysctl" command with magic syntax.'
'systemctl:run "systemctl" command with magic syntax.'
'systeminfo:run "systeminfo" command with magic syntax.'
@@ -91,6 +98,7 @@ _jc() {
'tracepath6:run "tracepath6" command with magic syntax.'
'traceroute:run "traceroute" command with magic syntax.'
'traceroute6:run "traceroute6" command with magic syntax.'
'tune2fs:run "tune2fs" command with magic syntax.'
'udevadm:run "udevadm" command with magic syntax.'
'ufw:run "ufw" command with magic syntax.'
'uname:run "uname" command with magic syntax.'
@@ -98,6 +106,7 @@ _jc() {
'upower:run "upower" command with magic syntax.'
'uptime:run "uptime" command with magic syntax.'
'vdir:run "vdir" command with magic syntax.'
'veracrypt:run "veracrypt" command with magic syntax.'
'vmstat:run "vmstat" command with magic syntax.'
'w:run "w" command with magic syntax.'
'wc:run "wc" command with magic syntax.'
@@ -106,7 +115,7 @@ _jc() {
'zipinfo:run "zipinfo" command with magic syntax.'
'zpool:run "zpool" command with magic syntax.'
)
jc_parsers=(--acpi --airport --airport-s --arp --asciitable --asciitable-m --blkid --bluetoothctl --cbt --cef --cef-s --certbot --chage --cksum --clf --clf-s --crontab --crontab-u --csv --csv-s --date --datetime-iso --df --dig --dir --dmidecode --dpkg-l --du --email-address --env --file --findmnt --finger --free --fstab --git-log --git-log-s --git-ls-remote --gpg --group --gshadow --hash --hashsum --hciconfig --history --hosts --id --ifconfig --ini --ini-dup --iostat --iostat-s --ip-address --iptables --iw-scan --iwconfig --jar-manifest --jobs --jwt --kv --last --ls --ls-s --lsblk --lsmod --lsof --lspci --lsusb --m3u --mdadm --mount --mpstat --mpstat-s --netstat --nmcli --ntpq --openvpn --os-prober --passwd --pci-ids --pgpass --pidstat --pidstat-s --ping --ping-s --pip-list --pip-show --plist --postconf --proc --proc-buddyinfo --proc-consoles --proc-cpuinfo --proc-crypto --proc-devices --proc-diskstats --proc-filesystems --proc-interrupts --proc-iomem --proc-ioports --proc-loadavg --proc-locks --proc-meminfo --proc-modules --proc-mtrr --proc-pagetypeinfo --proc-partitions --proc-slabinfo --proc-softirqs --proc-stat --proc-swaps --proc-uptime --proc-version --proc-vmallocinfo --proc-vmstat --proc-zoneinfo --proc-driver-rtc --proc-net-arp --proc-net-dev --proc-net-dev-mcast --proc-net-if-inet6 --proc-net-igmp --proc-net-igmp6 --proc-net-ipv6-route --proc-net-netlink --proc-net-netstat --proc-net-packet --proc-net-protocols --proc-net-route --proc-net-unix --proc-pid-fdinfo --proc-pid-io --proc-pid-maps --proc-pid-mountinfo --proc-pid-numa-maps --proc-pid-smaps --proc-pid-stat --proc-pid-statm --proc-pid-status --ps --route --rpm-qi --rsync --rsync-s --semver --sfdisk --shadow --ss --ssh-conf --sshd-conf --stat --stat-s --sysctl --syslog --syslog-s --syslog-bsd --syslog-bsd-s --systemctl --systemctl-lj --systemctl-ls --systemctl-luf --systeminfo --time --timedatectl --timestamp --toml --top --top-s --tracepath --traceroute --udevadm --ufw --ufw-appinfo --uname --update-alt-gs --update-alt-q --upower --uptime --url --ver --vmstat --vmstat-s --w --wc --who --x509-cert --xml --xrandr --yaml --zipinfo --zpool-iostat --zpool-status)
jc_parsers=(--acpi --airport --airport-s --arp --asciitable --asciitable-m --blkid --bluetoothctl --cbt --cef --cef-s --certbot --chage --cksum --clf --clf-s --crontab --crontab-u --csv --csv-s --date --datetime-iso --debconf-show --df --dig --dir --dmidecode --dpkg-l --du --email-address --env --file --find --findmnt --finger --free --fstab --git-log --git-log-s --git-ls-remote --gpg --group --gshadow --hash --hashsum --hciconfig --history --host --hosts --id --ifconfig --ini --ini-dup --iostat --iostat-s --ip-address --iptables --ip-route --iw-scan --iwconfig --jar-manifest --jobs --jwt --kv --last --ls --ls-s --lsattr --lsb-release --lsblk --lsmod --lsof --lspci --lsusb --m3u --mdadm --mount --mpstat --mpstat-s --netstat --nmcli --nsd-control --ntpq --openvpn --os-prober --os-release --passwd --pci-ids --pgpass --pidstat --pidstat-s --ping --ping-s --pip-list --pip-show --pkg-index-apk --pkg-index-deb --plist --postconf --proc --proc-buddyinfo --proc-cmdline --proc-consoles --proc-cpuinfo --proc-crypto --proc-devices --proc-diskstats --proc-filesystems --proc-interrupts --proc-iomem --proc-ioports --proc-loadavg --proc-locks --proc-meminfo --proc-modules --proc-mtrr --proc-pagetypeinfo --proc-partitions --proc-slabinfo --proc-softirqs --proc-stat --proc-swaps --proc-uptime --proc-version --proc-vmallocinfo --proc-vmstat --proc-zoneinfo --proc-driver-rtc --proc-net-arp --proc-net-dev --proc-net-dev-mcast --proc-net-if-inet6 --proc-net-igmp --proc-net-igmp6 --proc-net-ipv6-route --proc-net-netlink --proc-net-netstat --proc-net-packet --proc-net-protocols --proc-net-route --proc-net-tcp --proc-net-unix --proc-pid-fdinfo --proc-pid-io --proc-pid-maps --proc-pid-mountinfo --proc-pid-numa-maps --proc-pid-smaps --proc-pid-stat --proc-pid-statm --proc-pid-status --ps --resolve-conf --route --rpm-qi --rsync --rsync-s --semver --sfdisk --shadow --srt --ss --ssh-conf --sshd-conf --stat --stat-s --swapon --sysctl --syslog --syslog-s --syslog-bsd --syslog-bsd-s --systemctl --systemctl-lj --systemctl-ls --systemctl-luf --systeminfo --time --timedatectl --timestamp --toml --top --top-s --tracepath --traceroute --tune2fs --udevadm --ufw --ufw-appinfo --uname --update-alt-gs --update-alt-q --upower --uptime --url --ver --veracrypt --vmstat --vmstat-s --w --wc --who --x509-cert --x509-csr --xml --xrandr --yaml --zipinfo --zpool-iostat --zpool-status)
jc_parsers_describe=(
'--acpi:`acpi` command parser'
'--airport:`airport -I` command parser'
@@ -130,6 +139,7 @@ _jc() {
'--csv-s:CSV file streaming parser'
'--date:`date` command parser'
'--datetime-iso:ISO 8601 Datetime string parser'
'--debconf-show:`debconf-show` command parser'
'--df:`df` command parser'
'--dig:`dig` command parser'
'--dir:`dir` command parser'
@@ -139,6 +149,7 @@ _jc() {
'--email-address:Email Address string parser'
'--env:`env` command parser'
'--file:`file` command parser'
'--find:`find` command parser'
'--findmnt:`findmnt` command parser'
'--finger:`finger` command parser'
'--free:`free` command parser'
@@ -153,6 +164,7 @@ _jc() {
'--hashsum:hashsum command parser (`md5sum`, `shasum`, etc.)'
'--hciconfig:`hciconfig` command parser'
'--history:`history` command parser'
'--host:`host` command parser'
'--hosts:`/etc/hosts` file parser'
'--id:`id` command parser'
'--ifconfig:`ifconfig` command parser'
@@ -162,6 +174,7 @@ _jc() {
'--iostat-s:`iostat` command streaming parser'
'--ip-address:IPv4 and IPv6 Address string parser'
'--iptables:`iptables` command parser'
'--ip-route:`ip route` command parser'
'--iw-scan:`iw dev [device] scan` command parser'
'--iwconfig:`iwconfig` command parser'
'--jar-manifest:Java MANIFEST.MF file parser'
@@ -171,6 +184,8 @@ _jc() {
'--last:`last` and `lastb` command parser'
'--ls:`ls` command parser'
'--ls-s:`ls` command streaming parser'
'--lsattr:`lsattr` command parser'
'--lsb-release:`lsb_release` command parser'
'--lsblk:`lsblk` command parser'
'--lsmod:`lsmod` command parser'
'--lsof:`lsof` command parser'
@@ -183,9 +198,11 @@ _jc() {
'--mpstat-s:`mpstat` command streaming parser'
'--netstat:`netstat` command parser'
'--nmcli:`nmcli` command parser'
'--nsd-control:`nsd-control` command parser'
'--ntpq:`ntpq -p` command parser'
'--openvpn:openvpn-status.log file parser'
'--os-prober:`os-prober` command parser'
'--os-release:`/etc/os-release` file parser'
'--passwd:`/etc/passwd` file parser'
'--pci-ids:`pci.ids` file parser'
'--pgpass:PostgreSQL password file parser'
@@ -195,10 +212,13 @@ _jc() {
'--ping-s:`ping` and `ping6` command streaming parser'
'--pip-list:`pip list` command parser'
'--pip-show:`pip show` command parser'
'--pkg-index-apk:Alpine Linux Package Index file parser'
'--pkg-index-deb:Debian Package Index file parser'
'--plist:PLIST file parser'
'--postconf:`postconf -M` command parser'
'--proc:`/proc/` file parser'
'--proc-buddyinfo:`/proc/buddyinfo` file parser'
'--proc-cmdline:`/proc/cmdline` file parser'
'--proc-consoles:`/proc/consoles` file parser'
'--proc-cpuinfo:`/proc/cpuinfo` file parser'
'--proc-crypto:`/proc/crypto` file parser'
@@ -237,6 +257,7 @@ _jc() {
'--proc-net-packet:`/proc/net/packet` file parser'
'--proc-net-protocols:`/proc/net/protocols` file parser'
'--proc-net-route:`/proc/net/route` file parser'
'--proc-net-tcp:`/proc/net/tcp` and `/proc/net/tcp6` file parser'
'--proc-net-unix:`/proc/net/unix` file parser'
'--proc-pid-fdinfo:`/proc/<pid>/fdinfo/<fd>` file parser'
'--proc-pid-io:`/proc/<pid>/io` file parser'
@@ -248,6 +269,7 @@ _jc() {
'--proc-pid-statm:`/proc/<pid>/statm` file parser'
'--proc-pid-status:`/proc/<pid>/status` file parser'
'--ps:`ps` command parser'
'--resolve-conf:`/etc/resolve.conf` file parser'
'--route:`route` command parser'
'--rpm-qi:`rpm -qi` command parser'
'--rsync:`rsync` command parser'
@@ -255,11 +277,13 @@ _jc() {
'--semver:Semantic Version string parser'
'--sfdisk:`sfdisk` command parser'
'--shadow:`/etc/shadow` file parser'
'--srt:SRT file parser'
'--ss:`ss` command parser'
'--ssh-conf:`ssh` config file and `ssh -G` command parser'
'--sshd-conf:`sshd` config file and `sshd -T` command parser'
'--stat:`stat` command parser'
'--stat-s:`stat` command streaming parser'
'--swapon:`swapon` command parser'
'--sysctl:`sysctl` command parser'
'--syslog:Syslog RFC 5424 string parser'
'--syslog-s:Syslog RFC 5424 string streaming parser'
@@ -278,6 +302,7 @@ _jc() {
'--top-s:`top -b` command streaming parser'
'--tracepath:`tracepath` and `tracepath6` command parser'
'--traceroute:`traceroute` and `traceroute6` command parser'
'--tune2fs:`tune2fs -l` command parser'
'--udevadm:`udevadm info` command parser'
'--ufw:`ufw status` command parser'
'--ufw-appinfo:`ufw app info [application]` command parser'
@@ -288,12 +313,14 @@ _jc() {
'--uptime:`uptime` command parser'
'--url:URL string parser'
'--ver:Version string parser'
'--veracrypt:`veracrypt` command parser'
'--vmstat:`vmstat` command parser'
'--vmstat-s:`vmstat` command streaming parser'
'--w:`w` command parser'
'--wc:`wc` command parser'
'--who:`who` command parser'
'--x509-cert:X.509 PEM and DER certificate file parser'
'--x509-csr:X.509 PEM and DER certificate request file parser'
'--xml:XML file parser'
'--xrandr:`xrandr` command parser'
'--yaml:YAML file parser'


@@ -250,4 +250,4 @@ Returns:
### Parser Information
Compatibility: linux
Version 1.6 by Kelly Brazil (kellyjonbrazil@gmail.com)
Version 1.7 by Kelly Brazil (kellyjonbrazil@gmail.com)


@@ -36,6 +36,7 @@ a controller and a device but there might be fields corresponding to one entity.
"name": string,
"is_default": boolean,
"is_public": boolean,
"is_random": boolean,
"address": string,
"alias": string,
"class": string,
@@ -54,8 +55,10 @@ a controller and a device but there might be fields corresponding to one entity.
{
"name": string,
"is_public": boolean,
"is_random": boolean,
"address": string,
"alias": string,
"appearance": string,
"class": string,
"icon": string,
"paired": string,
@@ -66,7 +69,8 @@ a controller and a device but there might be fields corresponding to one entity.
"legacy_pairing": string,
"rssi": int,
"txpower": int,
"uuids": array
"uuids": array,
"modalias": string
}
]
@@ -126,4 +130,4 @@ Returns:
### Parser Information
Compatibility: linux
Version 1.0 by Jake Ob (iakopap at gmail.com)
Version 1.1 by Jake Ob (iakopap at gmail.com)


@@ -158,4 +158,4 @@ Returns:
### Parser Information
Compatibility: linux, darwin, cygwin, win32, aix, freebsd
Version 1.0 by Kelly Brazil (kellyjonbrazil@gmail.com)
Version 1.2 by Kelly Brazil (kellyjonbrazil@gmail.com)


@@ -0,0 +1,105 @@
[Home](https://kellyjonbrazil.github.io/jc/)
<a id="jc.parsers.debconf_show"></a>
# jc.parsers.debconf\_show
jc - JSON Convert `debconf-show` command output parser
Usage (cli):
$ debconf-show onlyoffice-documentserver | jc --debconf-show
or
$ jc debconf-show onlyoffice-documentserver
Usage (module):
import jc
result = jc.parse('debconf_show', debconf_show_command_output)
Schema:
[
{
"asked": boolean,
"packagename": string,
"name": string,
"value": string
}
]
Examples:
$ debconf-show onlyoffice-documentserver | jc --debconf-show -p
[
{
"asked": true,
"packagename": "onlyoffice",
"name": "jwt_secret",
"value": "aL8ei2iereuzee7cuJ6Cahjah1ixee2ah"
},
{
"asked": false,
"packagename": "onlyoffice",
"name": "db_pwd",
"value": "(password omitted)"
},
{
"asked": true,
"packagename": "onlyoffice",
"name": "rabbitmq_pwd",
"value": "(password omitted)"
},
{
"asked": true,
"packagename": "onlyoffice",
"name": "db_port",
"value": "5432"
},
{
"asked": true,
"packagename": "onlyoffice",
"name": "db_user",
"value": "onlyoffice"
},
{
"asked": true,
"packagename": "onlyoffice",
"name": "rabbitmq_proto",
"value": "amqp"
},
{
"asked": true,
"packagename": "onlyoffice",
"name": "cluster_mode",
"value": "false"
}
]
<a id="jc.parsers.debconf_show.parse"></a>
### parse
```python
def parse(data: str,
raw: bool = False,
quiet: bool = False) -> List[JSONDictType]
```
Main text parsing function
Parameters:
data: (string) text data to parse
raw: (boolean) unprocessed output if True
quiet: (boolean) suppress warning messages if True
Returns:
List of Dictionaries. Raw or processed structured data.
### Parser Information
Compatibility: linux
Version 1.0 by Kelly Brazil (kellyjonbrazil@gmail.com)
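The schema above maps onto `debconf-show` output lines of the form `package/question: value`, with a leading `*` marking questions that were asked during configuration. As a rough illustration only (this is not jc's actual implementation; the function name and sample lines here are hypothetical), a stdlib-only sketch might look like:

```python
from typing import Dict, List


def parse_debconf_show(data: str) -> List[Dict]:
    """Sketch: parse '* package/question: value' lines, where a
    leading '*' means the question was asked."""
    entries = []
    for line in data.splitlines():
        line = line.strip()
        if not line:
            continue
        asked = line.startswith('*')
        line = line.lstrip('* ').strip()       # drop the '*' marker
        key, _, value = line.partition(':')
        packagename, _, name = key.partition('/')
        entries.append({
            'asked': asked,
            'packagename': packagename,
            'name': name,
            'value': value.strip(),
        })
    return entries


sample = "* onlyoffice/db_port: 5432\n  onlyoffice/db_pwd: (password omitted)"
records = parse_debconf_show(sample)
```

The real parser also handles type conversion and edge cases; this sketch only shows why the schema has exactly those four keys.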


@@ -9,6 +9,7 @@ Options supported:
- `+noall +answer` options are supported in cases where only the answer
information is desired.
- `+axfr` option is supported on its own
- `+nsid` option is supported
The `when_epoch` calculated timestamp field is naive. (i.e. based on the
local time of the system the parser is run on)
@@ -345,4 +346,4 @@ Returns:
### Parser Information
Compatibility: linux, aix, freebsd, darwin, win32, cygwin
Version 2.4 by Kelly Brazil (kellyjonbrazil@gmail.com)
Version 2.5 by Kelly Brazil (kellyjonbrazil@gmail.com)


@@ -90,10 +90,10 @@ Parameters:
Returns:
Dictionary of raw structured data or
List of Dictionaries of processed structured data
Dictionary of raw structured data or (default)
List of Dictionaries of processed structured data (raw)
### Parser Information
Compatibility: linux, darwin, cygwin, win32, aix, freebsd
Version 1.4 by Kelly Brazil (kellyjonbrazil@gmail.com)
Version 1.5 by Kelly Brazil (kellyjonbrazil@gmail.com)

docs/parsers/find.md Normal file

@@ -0,0 +1,82 @@
[Home](https://kellyjonbrazil.github.io/jc/)
<a id="jc.parsers.find"></a>
# jc.parsers.find
jc - JSON Convert `find` command output parser
This parser returns a list of objects by default and a list of strings if
the `--raw` option is used.
Usage (cli):
$ find | jc --find
Usage (module):
import jc
result = jc.parse('find', find_command_output)
Schema:
[
{
"path": string,
"node": string,
"error": string
}
]
Examples:
$ find | jc --find -p
[
{
"path": "./directory"
"node": "filename"
},
{
"path": "./anotherdirectory"
"node": "anotherfile"
},
{
"path": null
"node": null
"error": "find: './inaccessible': Permission denied"
}
...
]
$ find | jc --find -p -r
[
"./templates/readme_template",
"./templates/manpage_template",
"./.github/workflows/pythonapp.yml",
...
]
<a id="jc.parsers.find.parse"></a>
### parse
```python
def parse(data, raw=False, quiet=False)
```
Main text parsing function
Parameters:
data: (string) text data to parse
raw: (boolean) unprocessed output if True
quiet: (boolean) suppress warning messages if True
Returns:
List of raw strings or
List of Dictionaries of processed structured data
### Parser Information
Compatibility: linux
Version 1.0 by Solomon Leang (solomonleang@gmail.com)
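Since `find` emits one path per line, and permission problems appear as `find: ...` error lines, the `path`/`node`/`error` split in the schema above can be sketched with the standard library alone. This is an illustrative sketch under those assumptions, not jc's implementation:

```python
import os
from typing import Dict, List


def parse_find(data: str) -> List[Dict]:
    """Sketch: split each find output line into directory ('path') and
    basename ('node'); 'find: ...' lines become error entries."""
    out = []
    for line in data.splitlines():
        if not line:
            continue
        if line.startswith('find:'):
            out.append({'path': None, 'node': None, 'error': line})
        else:
            head, tail = os.path.split(line)   # './a/b' -> ('./a', 'b')
            out.append({'path': head, 'node': tail})
    return out
```

A line like `./templates/readme_template` therefore yields `path` `./templates` and `node` `readme_template`, matching the first example above.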

docs/parsers/host.md Normal file

@@ -0,0 +1,113 @@
[Home](https://kellyjonbrazil.github.io/jc/)
<a id="jc.parsers.host"></a>
# jc.parsers.host
jc - JSON Convert `host` command output parser
Supports parsing of the most commonly used RR types (A, AAAA, MX, TXT)
Usage (cli):
$ host google.com | jc --host
or
$ jc host google.com
Usage (module):
import jc
result = jc.parse('host', host_command_output)
Schema:
[
{
"hostname": string,
"address": [
string
],
"v6-address": [
string
],
"mail": [
string
]
}
]
[
{
"nameserver": string,
"zone": string,
"mname": string,
"rname": string,
"serial": integer,
"refresh": integer,
"retry": integer,
"expire": integer,
"minimum": integer
}
]
Examples:
$ host google.com | jc --host
[
{
"hostname": "google.com",
"address": [
"142.251.39.110"
],
"v6-address": [
"2a00:1450:400e:811::200e"
],
"mail": [
"smtp.google.com."
]
}
]
$ jc host -C sunet.se
[
{
"nameserver": "2001:6b0:7::2",
"zone": "sunet.se",
"mname": "sunic.sunet.se.",
"rname": "hostmaster.sunet.se.",
"serial": "2023090401",
"refresh": "28800",
"retry": "7200",
"expire": "604800",
"minimum": "300"
},
{
...
}
]
<a id="jc.parsers.host.parse"></a>
### parse
```python
def parse(data: str, raw: bool = False, quiet: bool = False)
```
Main text parsing function
Parameters:
data: (string) text data to parse
raw: (boolean) unprocessed output if True
quiet: (boolean) suppress warning messages if True
Returns:
List of Dictionaries. Raw or processed structured data.
### Parser Information
Compatibility: linux, darwin, cygwin, win32, aix, freebsd
Version 1.0 by Pettai (pettai@sunet.se)
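For the common A/AAAA/MX lines, the first schema above can be approximated by keying off the fixed phrases in `host` output (`has address`, `has IPv6 address`, `mail is handled by`). A hedged, stdlib-only sketch, not jc's implementation, and ignoring the SOA form shown in the `-C` example:

```python
from typing import Dict, List


def parse_host(data: str) -> List[Dict]:
    """Sketch: collect A, AAAA, and MX answers from `host` output."""
    addresses: List[str] = []
    v6: List[str] = []
    mail: List[str] = []
    hostname = None
    for line in data.splitlines():
        words = line.split()
        if not words:
            continue
        hostname = words[0]
        if 'has IPv6 address' in line:
            v6.append(words[-1])
        elif 'has address' in line:
            addresses.append(words[-1])
        elif 'mail is handled by' in line:
            mail.append(words[-1])          # keep the exchange name only
    return [{'hostname': hostname, 'address': addresses,
             'v6-address': v6, 'mail': mail}]
```

Real output includes more RR types (TXT, CNAME, SOA) that a full parser must branch on, which is why the doc shows two alternate schemas.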


@@ -22,12 +22,12 @@ contained in lists/arrays.
Usage (cli):
$ cat foo.ini | jc --ini
$ cat foo.ini | jc --ini-dup
Usage (module):
import jc
result = jc.parse('ini', ini_file_output)
result = jc.parse('ini_dup', ini_file_output)
Schema:
@@ -67,7 +67,7 @@ Examples:
fruit = peach
color = green
$ cat example.ini | jc --ini -p
$ cat example.ini | jc --ini-dup -p
{
"foo": [
"fiz"
@@ -118,4 +118,4 @@ Returns:
### Parser Information
Compatibility: linux, darwin, cygwin, win32, aix, freebsd
Version 1.0 by Kelly Brazil (kellyjonbrazil@gmail.com)
Version 1.1 by Kelly Brazil (kellyjonbrazil@gmail.com)

docs/parsers/ip_route.md Normal file

@@ -0,0 +1,74 @@
[Home](https://kellyjonbrazil.github.io/jc/)
<a id="jc.parsers.ip_route"></a>
# jc.parsers.ip\_route
jc - JSON Convert `ip route` command output parser
Usage (cli):
$ ip route | jc --ip-route
or
$ jc ip-route
Usage (module):
import jc
result = jc.parse('ip_route', ip_route_command_output)
Schema:
[
{
"ip": string,
"via": string,
"dev": string,
"metric": integer,
"proto": string,
"scope": string,
"src": string,
"via": string,
"status": string
}
]
Examples:
$ ip route | jc --ip-route -p
[
{
"ip": "10.0.2.0/24",
"dev": "enp0s3",
"proto": "kernel",
"scope": "link",
"src": "10.0.2.15",
"metric": 100
}
]
<a id="jc.parsers.ip_route.parse"></a>
### parse
```python
def parse(data, raw=False, quiet=False)
```
Main text parsing function
Parameters:
data: (string) text data to parse
raw: (boolean) unprocessed output if True
quiet: (boolean) suppress warning messages if True
Returns:
List of Json objects if data is processed and Raw data if raw = true.
### Parser Information
Compatibility: linux
Version 1.0 by Julian Jackson (jackson.julian55@yahoo.com)
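`ip route` lines are mostly a destination followed by alternating key/value tokens, which is why the schema above is flat. A minimal sketch under that assumption (not jc's implementation; real output also has flag-style tokens such as `linkdown`, which this pairing silently drops):

```python
from typing import Dict, List


def parse_ip_route(data: str) -> List[Dict]:
    """Sketch: first token is the destination ('ip'); the rest are
    treated as key/value pairs, with 'metric' converted to int."""
    routes = []
    for line in data.splitlines():
        words = line.split()
        if not words:
            continue
        route: Dict = {'ip': words[0]}
        rest = words[1:]
        for key, value in zip(rest[::2], rest[1::2]):
            route[key] = int(value) if key == 'metric' else value
        routes.append(route)
    return routes
```

Running this on the example line `10.0.2.0/24 dev enp0s3 proto kernel scope link src 10.0.2.15 metric 100` reproduces the JSON object shown in the doc's example output.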


@@ -30,7 +30,7 @@ Schema:
"num" integer,
"pkts": integer,
"bytes": integer, # converted based on suffix
"target": string,
"target": string, # Null if blank
"prot": string,
"opt": string, # "--" = Null
"in": string,
@@ -186,4 +186,4 @@ Returns:
### Parser Information
Compatibility: linux
Version 1.8 by Kelly Brazil (kellyjonbrazil@gmail.com)
Version 1.9 by Kelly Brazil (kellyjonbrazil@gmail.com)


@@ -1,37 +0,0 @@
[Home](https://kellyjonbrazil.github.io/jc/)
<a id="jc.parsers.iso_datetime"></a>
# jc.parsers.iso\_datetime
jc - JSON Convert ISO 8601 Datetime string parser
This parser has been renamed to datetime-iso (cli) or datetime_iso (module).
This parser will be removed in a future version, so please start using
the new parser name.
<a id="jc.parsers.iso_datetime.parse"></a>
### parse
```python
def parse(data, raw=False, quiet=False)
```
This parser is deprecated and calls datetime_iso. Please use datetime_iso
directly. This parser will be removed in the future.
Parameters:
data: (string) text data to parse
raw: (boolean) unprocessed output if True
quiet: (boolean) suppress warning messages if True
Returns:
Dictionary. Raw or processed structured data.
### Parser Information
Compatibility: linux, aix, freebsd, darwin, win32, cygwin
Version 1.1 by Kelly Brazil (kellyjonbrazil@gmail.com)


@@ -5,7 +5,7 @@
jc - JSON Convert `last` and `lastb` command output parser
Supports `-w` and `-F` options.
Supports `-w`, `-F`, and `-x` options.
Calculated epoch time fields are naive (i.e. based on the local time of the
system the parser is run on) since there is no timezone information in the
@@ -127,4 +127,4 @@ Returns:
### Parser Information
Compatibility: linux, darwin, aix, freebsd
Version 1.8 by Kelly Brazil (kellyjonbrazil@gmail.com)
Version 1.9 by Kelly Brazil (kellyjonbrazil@gmail.com)

docs/parsers/lsattr.md Normal file

@@ -0,0 +1,89 @@
[Home](https://kellyjonbrazil.github.io/jc/)
<a id="jc.parsers.lsattr"></a>
# jc.parsers.lsattr
jc - JSON Convert `lsattr` command output parser
Usage (cli):
$ lsattr | jc --lsattr
or
$ jc lsattr
Usage (module):
import jc
result = jc.parse('lsattr', lsattr_command_output)
Schema:
Information from https://github.com/mirror/busybox/blob/2d4a3d9e6c1493a9520b907e07a41aca90cdfd94/e2fsprogs/e2fs_lib.c#L40
used to define field names
[
{
"file": string,
"compressed_file": Optional[boolean],
"compressed_dirty_file": Optional[boolean],
"compression_raw_access": Optional[boolean],
"secure_deletion": Optional[boolean],
"undelete": Optional[boolean],
"synchronous_updates": Optional[boolean],
"synchronous_directory_updates": Optional[boolean],
"immutable": Optional[boolean],
"append_only": Optional[boolean],
"no_dump": Optional[boolean],
"no_atime": Optional[boolean],
"compression_requested": Optional[boolean],
"encrypted": Optional[boolean],
"journaled_data": Optional[boolean],
"indexed_directory": Optional[boolean],
"no_tailmerging": Optional[boolean],
"top_of_directory_hierarchies": Optional[boolean],
"extents": Optional[boolean],
"no_cow": Optional[boolean],
"casefold": Optional[boolean],
"inline_data": Optional[boolean],
"project_hierarchy": Optional[boolean],
"verity": Optional[boolean],
}
]
Examples:
$ sudo lsattr /etc/passwd | jc --lsattr
[
{
"file": "/etc/passwd",
"extents": true
}
]
<a id="jc.parsers.lsattr.parse"></a>
### parse
```python
def parse(data: str,
raw: bool = False,
quiet: bool = False) -> List[JSONDictType]
```
Main text parsing function
Parameters:
data: (string) text data to parse
raw: (boolean) unprocessed output if True
quiet: (boolean) suppress warning messages if True
Returns:
List of Dictionaries. Raw or processed structured data.
### Parser Information
Compatibility: linux
Version 1.0 by Mark Rotner (rotner.mr@gmail.com)
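The attribute-letter decoding can be sketched as follows. The letter-to-field mapping here is an assumed subset based on the e2fsprogs flag letters referenced above (`i` immutable, `a` append only, `e` extents, etc.); the real parser covers the full list:

```python
# assumed subset of the e2fsprogs attribute-letter mapping
ATTR_FIELDS = {
    's': 'secure_deletion',
    'u': 'undelete',
    'i': 'immutable',
    'a': 'append_only',
    'd': 'no_dump',
    'A': 'no_atime',
    'j': 'journaled_data',
    'I': 'indexed_directory',
    'e': 'extents',
    'C': 'no_cow',
}

def parse_lsattr_line(line):
    """Split an lsattr line into the attribute string and filename,
    then set a boolean field for each known attribute letter."""
    attrs, filename = line.split(maxsplit=1)
    entry = {'file': filename}
    for ch in attrs:
        if ch in ATTR_FIELDS:
            entry[ATTR_FIELDS[ch]] = True
    return entry

print(parse_lsattr_line('--------------e------- /etc/passwd'))
```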


@@ -0,0 +1,61 @@
[Home](https://kellyjonbrazil.github.io/jc/)
<a id="jc.parsers.lsb_release"></a>
# jc.parsers.lsb\_release
jc - JSON Convert `lsb_release` command parser
This parser is an alias to the Key/Value parser (`--kv`).
Usage (cli):
$ lsb_release -a | jc --lsb-release
or
$ jc lsb_release -a
Usage (module):
import jc
result = jc.parse('lsb_release', lsb_release_command_output)
Schema:
{
"<key>": string
}
Examples:
$ lsb_release -a | jc --lsb-release -p
{
"Distributor ID": "Ubuntu",
"Description": "Ubuntu 16.04.6 LTS",
"Release": "16.04",
"Codename": "xenial"
}
<a id="jc.parsers.lsb_release.parse"></a>
### parse
```python
def parse(data: str, raw: bool = False, quiet: bool = False) -> JSONDictType
```
Main text parsing function
Parameters:
data: (string) text data to parse
raw: (boolean) unprocessed output if True
quiet: (boolean) suppress warning messages if True
Returns:
Dictionary. Raw or processed structured data.
### Parser Information
Compatibility: linux, darwin, cygwin, win32, aix, freebsd
Version 1.0 by Kelly Brazil (kellyjonbrazil@gmail.com)


@@ -98,4 +98,4 @@ Returns:
### Parser Information
Compatibility: linux, darwin, freebsd, aix
Version 1.8 by Kelly Brazil (kellyjonbrazil@gmail.com)
Version 1.9 by Kelly Brazil (kellyjonbrazil@gmail.com)


@@ -376,6 +376,6 @@ Returns:
List of Dictionaries. Raw or processed structured data.
### Parser Information
Compatibility: linux, darwin, freebsd
Compatibility: linux, darwin, freebsd, win32
Version 1.13 by Kelly Brazil (kellyjonbrazil@gmail.com)
Version 1.15 by Kelly Brazil (kellyjonbrazil@gmail.com)


@@ -0,0 +1,90 @@
[Home](https://kellyjonbrazil.github.io/jc/)
<a id="jc.parsers.nsd_control"></a>
# jc.parsers.nsd\_control
jc - JSON Convert `nsd-control` command output parser
Usage (cli):
$ nsd-control | jc --nsd-control
or
$ jc nsd-control
Usage (module):
import jc
result = jc.parse('nsd_control', nsd_control_command_output)
Schema:
[
{
"version": string,
"verbosity": integer,
"ratelimit": integer
}
]
[
{
"zone": string
"status": {
"state": string,
"served-serial": string,
"commit-serial": string,
"wait": string
}
}
]
Examples:
$ nsd-control | jc --nsd-control status
[
{
"version": "4.6.2",
"verbosity": "2",
"ratelimit": "0"
}
]
$ nsd-control | jc --nsd-control zonestatus sunet.se
[
{
"zone": "sunet.se",
"status": {
"state": "ok",
"served-serial": "2023090704 since 2023-09-07T16:34:27",
"commit-serial": "2023090704 since 2023-09-07T16:34:27",
"wait": "28684 sec between attempts"
}
}
]
<a id="jc.parsers.nsd_control.parse"></a>
### parse
```python
def parse(data: str, raw: bool = False, quiet: bool = False)
```
Main text parsing function
Parameters:
data: (string) text data to parse
raw: (boolean) unprocessed output if True
quiet: (boolean) suppress warning messages if True
Returns:
List of Dictionaries. Raw or processed structured data.
### Parser Information
Compatibility: linux, darwin, cygwin, win32, aix, freebsd
Version 1.0 by Pettai (pettai@sunet.se)
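The `status` output is plain `key: value` lines, which can be sketched stand-alone (not jc's implementation; note the schema above declares `verbosity` and `ratelimit` as integers, so this sketch converts digit-only values):

```python
def parse_nsd_status(text):
    """Parse `nsd-control status`-style output: one `key: value`
    pair per line, with digit-only values converted to integers."""
    result = {}
    for line in text.splitlines():
        key, _, value = line.partition(':')
        value = value.strip()
        result[key.strip()] = int(value) if value.isdigit() else value
    return result

print(parse_nsd_status('version: 4.6.2\nverbosity: 2\nratelimit: 0'))
```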


@@ -0,0 +1,86 @@
[Home](https://kellyjonbrazil.github.io/jc/)
<a id="jc.parsers.os_release"></a>
# jc.parsers.os\_release
jc - JSON Convert `/etc/os-release` file parser
This parser is an alias to the Key/Value parser (`--kv`).
Usage (cli):
$ cat /etc/os-release | jc --os-release
Usage (module):
import jc
result = jc.parse('os_release', os_release_output)
Schema:
{
"<key>": string
}
Examples:
$ cat /etc/os-release | jc --os-release -p
{
"NAME": "CentOS Linux",
"VERSION": "7 (Core)",
"ID": "centos",
"ID_LIKE": "rhel fedora",
"VERSION_ID": "7",
"PRETTY_NAME": "CentOS Linux 7 (Core)",
"ANSI_COLOR": "0;31",
"CPE_NAME": "cpe:/o:centos:centos:7",
"HOME_URL": "https://www.centos.org/",
"BUG_REPORT_URL": "https://bugs.centos.org/",
"CENTOS_MANTISBT_PROJECT": "CentOS-7",
"CENTOS_MANTISBT_PROJECT_VERSION": "7",
"REDHAT_SUPPORT_PRODUCT": "centos",
"REDHAT_SUPPORT_PRODUCT_VERSION": "7"
}
$ cat /etc/os-release | jc --os-release -p -r
{
"NAME": "\\"CentOS Linux\\"",
"VERSION": "\\"7 (Core)\\"",
"ID": "\\"centos\\"",
"ID_LIKE": "\\"rhel fedora\\"",
"VERSION_ID": "\\"7\\"",
"PRETTY_NAME": "\\"CentOS Linux 7 (Core)\\"",
"ANSI_COLOR": "\\"0;31\\"",
"CPE_NAME": "\\"cpe:/o:centos:centos:7\\"",
"HOME_URL": "\\"https://www.centos.org/\\"",
"BUG_REPORT_URL": "\\"https://bugs.centos.org/\\"",
"CENTOS_MANTISBT_PROJECT": "\\"CentOS-7\\"",
"CENTOS_MANTISBT_PROJECT_VERSION": "\\"7\\"",
"REDHAT_SUPPORT_PRODUCT": "\\"centos\\"",
"REDHAT_SUPPORT_PRODUCT_VERSION": "\\"7\\""
}
<a id="jc.parsers.os_release.parse"></a>
### parse
```python
def parse(data: str, raw: bool = False, quiet: bool = False) -> JSONDictType
```
Main text parsing function
Parameters:
data: (string) text data to parse
raw: (boolean) unprocessed output if True
quiet: (boolean) suppress warning messages if True
Returns:
Dictionary. Raw or processed structured data.
### Parser Information
Compatibility: linux, darwin, cygwin, win32, aix, freebsd
Version 1.0 by Kelly Brazil (kellyjonbrazil@gmail.com)
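Since this parser is an alias to the Key/Value parser, the difference between the processed and raw (`-r`) examples above is essentially quote stripping. A minimal sketch of that step (not jc's implementation):

```python
def parse_os_release(text):
    """Key/value parse of /etc/os-release content: skip blanks and
    comments, split on the first '=', strip surrounding quotes."""
    result = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#') or '=' not in line:
            continue
        key, _, value = line.partition('=')
        result[key] = value.strip('"')
    return result

sample = 'NAME="CentOS Linux"\nVERSION="7 (Core)"\nID="centos"'
print(parse_os_release(sample))
```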


@@ -45,6 +45,9 @@ Schema:
"kb_ccwr_s": float,
"cswch_s": float,
"nvcswch_s": float,
"usr_ms": integer,
"system_ms": integer,
"guest_ms": integer,
"command": string
}
]
@@ -148,4 +151,4 @@ Returns:
### Parser Information
Compatibility: linux
Version 1.1 by Kelly Brazil (kellyjonbrazil@gmail.com)
Version 1.3 by Kelly Brazil (kellyjonbrazil@gmail.com)


@@ -39,6 +39,7 @@ Schema:
"percent_usr": float,
"percent_system": float,
"percent_guest": float,
"percent_wait": float,
"percent_cpu": float,
"cpu": integer,
"minflt_s": float,
@@ -53,6 +54,9 @@ Schema:
"kb_ccwr_s": float,
"cswch_s": float,
"nvcswch_s": float,
"usr_ms": integer,
"system_ms": integer,
"guest_ms": integer,
"command": string,
# below object only exists if using -qq or ignore_exceptions=True
@@ -107,4 +111,4 @@ Returns:
### Parser Information
Compatibility: linux
Version 1.1 by Kelly Brazil (kellyjonbrazil@gmail.com)
Version 1.2 by Kelly Brazil (kellyjonbrazil@gmail.com)


@@ -35,6 +35,8 @@ Schema:
"packets_received": integer,
"packet_loss_percent": float,
"duplicates": integer,
"errors": integer,
"corrupted": integer,
"round_trip_ms_min": float,
"round_trip_ms_avg": float,
"round_trip_ms_max": float,
@@ -185,4 +187,4 @@ Returns:
### Parser Information
Compatibility: linux, darwin, freebsd
Version 1.8 by Kelly Brazil (kellyjonbrazil@gmail.com)
Version 1.10 by Kelly Brazil (kellyjonbrazil@gmail.com)


@@ -36,7 +36,7 @@ Schema:
"source_ip": string,
"destination_ip": string,
"sent_bytes": integer,
"pattern": string, # (null if not set)
"pattern": string, # null if not set
"destination": string,
"timestamp": float,
"response_bytes": integer,
@@ -49,10 +49,12 @@ Schema:
"packets_received": integer,
"packet_loss_percent": float,
"duplicates": integer,
"round_trip_ms_min": float,
"round_trip_ms_avg": float,
"round_trip_ms_max": float,
"round_trip_ms_stddev": float,
"errors": integer, # null if not set
"corrupted": integer, # null if not set
"round_trip_ms_min": float, # null if not set
"round_trip_ms_avg": float, # null if not set
"round_trip_ms_max": float, # null if not set
"round_trip_ms_stddev": float, # null if not set
# below object only exists if using -qq or ignore_exceptions=True
"_jc_meta": {
@@ -106,4 +108,4 @@ Returns:
### Parser Information
Compatibility: linux, darwin, freebsd
Version 1.2 by Kelly Brazil (kellyjonbrazil@gmail.com)
Version 1.4 by Kelly Brazil (kellyjonbrazil@gmail.com)


@@ -0,0 +1,126 @@
[Home](https://kellyjonbrazil.github.io/jc/)
<a id="jc.parsers.pkg_index_apk"></a>
# jc.parsers.pkg\_index\_apk
jc - JSON Convert Alpine Linux Package Index files
Usage (cli):
$ cat APKINDEX | jc --pkg-index-apk
Usage (module):
import jc
result = jc.parse('pkg_index_apk', pkg_index_apk_output)
Schema:
[
{
"checksum": string,
"package": string,
"version": string,
"architecture": string,
"package_size": integer,
"installed_size": integer,
"description": string,
"url": string,
"license": string,
"origin": string,
"maintainer": {
"name": string,
"email": string,
},
"build_time": integer,
"commit": string,
"provider_priority": string,
"dependencies": [
string
],
"provides": [
string
],
"install_if": [
string
],
}
]
Example:
$ cat APKINDEX | jc --pkg-index-apk
[
{
"checksum": "Q1znBl9k+RKgY6gl5Eg3iz73KZbLY=",
"package": "yasm",
"version": "1.3.0-r4",
"architecture": "x86_64",
"package_size": 772109,
"installed_size": 1753088,
"description": "A rewrite of NASM to allow for multiple synta...",
"url": "http://www.tortall.net/projects/yasm/",
"license": "BSD-2-Clause",
"origin": "yasm",
"maintainer": {
"name": "Natanael Copa",
"email": "ncopa@alpinelinux.org"
},
"build_time": 1681228881,
"commit": "84a227baf001b6e0208e3352b294e4d7a40e93de",
"dependencies": [
"so:libc.musl-x86_64.so.1"
],
"provides": [
"cmd:vsyasm=1.3.0-r4",
"cmd:yasm=1.3.0-r4",
"cmd:ytasm=1.3.0-r4"
]
}
]
$ cat APKINDEX | jc --pkg-index-apk --raw
[
{
"C": "Q1znBl9k+RKgY6gl5Eg3iz73KZbLY=",
"P": "yasm",
"V": "1.3.0-r4",
"A": "x86_64",
"S": "772109",
"I": "1753088",
"T": "A rewrite of NASM to allow for multiple syntax supported...",
"U": "http://www.tortall.net/projects/yasm/",
"L": "BSD-2-Clause",
"o": "yasm",
"m": "Natanael Copa <ncopa@alpinelinux.org>",
"t": "1681228881",
"c": "84a227baf001b6e0208e3352b294e4d7a40e93de",
"D": "so:libc.musl-x86_64.so.1",
"p": "cmd:vsyasm=1.3.0-r4 cmd:yasm=1.3.0-r4 cmd:ytasm=1.3.0-r4"
}
]
<a id="jc.parsers.pkg_index_apk.parse"></a>
### parse
```python
def parse(data: str, raw: bool = False, quiet: bool = False) -> List[Dict]
```
Main text parsing function
Parameters:
data: (string) text data to parse
raw: (boolean) unprocessed output if True
quiet: (boolean) suppress warning messages if True
Returns:
List of Dictionaries. Raw or processed structured data.
### Parser Information
Compatibility: linux, darwin, cygwin, win32, aix, freebsd
Version 1.0 by Roey Darwish Dror (roey.ghost@gmail.com)
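The raw (`--raw`) stage shown above is simple to sketch: records are blank-line separated, and each line is a single-letter key and a value joined by `:` (this stand-alone sketch skips the friendly-name mapping and type conversion the real parser performs):

```python
def parse_apkindex(text):
    """Split an APKINDEX into blank-line-separated records of
    single-letter keys, keeping values as raw strings."""
    packages = []
    for block in text.split('\n\n'):
        record = {}
        for line in block.splitlines():
            if ':' in line:
                key, _, value = line.partition(':')
                record[key] = value
        if record:
            packages.append(record)
    return packages

sample = 'C:Q1znBl9k+RKgY6gl5Eg3iz73KZbLY=\nP:yasm\nV:1.3.0-r4\n\nP:zlib\nV:1.2.13-r0'
print(parse_apkindex(sample))
```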


@@ -0,0 +1,138 @@
[Home](https://kellyjonbrazil.github.io/jc/)
<a id="jc.parsers.pkg_index_deb"></a>
# jc.parsers.pkg\_index\_deb
jc - JSON Convert Debian Package Index file parser
Usage (cli):
$ cat Packages | jc --pkg-index-deb
Usage (module):
import jc
result = jc.parse('pkg_index_deb', pkg_index_deb_output)
Schema:
[
{
"package": string,
"version": string,
"architecture": string,
"section": string,
"priority": string,
"installed_size": integer,
"maintainer": string,
"description": string,
"homepage": string,
"depends": string,
"conflicts": string,
"replaces": string,
"vcs_git": string,
"sha256": string,
"size": integer,
"vcs_git": string,
"filename": string
}
]
Examples:
$ cat Packages | jc --pkg-index-deb
[
{
"package": "aspnetcore-runtime-2.1",
"version": "2.1.22-1",
"architecture": "amd64",
"section": "devel",
"priority": "standard",
"installed_size": 71081,
"maintainer": "Microsoft <nugetaspnet@microsoft.com>",
"description": "Microsoft ASP.NET Core 2.1.22 Shared Framework",
"homepage": "https://www.asp.net/",
"depends": "libc6 (>= 2.14), dotnet-runtime-2.1 (>= 2.1.22)",
"sha256": "48d4e78a7ceff34105411172f4c3e91a0359b3929d84d26a493...",
"size": 21937036,
"filename": "pool/main/a/aspnetcore-runtime-2.1/aspnetcore-run..."
},
{
"package": "azure-functions-core-tools-4",
"version": "4.0.4590-1",
"architecture": "amd64",
"section": "devel",
"priority": "optional",
"maintainer": "Ahmed ElSayed <ahmels@microsoft.com>",
"description": "Azure Function Core Tools v4",
"homepage": "https://docs.microsoft.com/en-us/azure/azure-func...",
"conflicts": "azure-functions-core-tools-2, azure-functions-co...",
"replaces": "azure-functions-core-tools-2, azure-functions-cor...",
"vcs_git": "https://github.com/Azure/azure-functions-core-tool...",
"sha256": "a2a4f99d6d98ba0a46832570285552f2a93bab06cebbda2afc7...",
"size": 124417844,
"filename": "pool/main/a/azure-functions-core-tools-4/azure-fu..."
}
]
$ cat Packages | jc --pkg-index-deb -r
[
{
"package": "aspnetcore-runtime-2.1",
"version": "2.1.22-1",
"architecture": "amd64",
"section": "devel",
"priority": "standard",
"installed_size": "71081",
"maintainer": "Microsoft <nugetaspnet@microsoft.com>",
"description": "Microsoft ASP.NET Core 2.1.22 Shared Framework",
"homepage": "https://www.asp.net/",
"depends": "libc6 (>= 2.14), dotnet-runtime-2.1 (>= 2.1.22)",
"sha256": "48d4e78a7ceff34105411172f4c3e91a0359b3929d84d26a493...",
"size": "21937036",
"filename": "pool/main/a/aspnetcore-runtime-2.1/aspnetcore-run..."
},
{
"package": "azure-functions-core-tools-4",
"version": "4.0.4590-1",
"architecture": "amd64",
"section": "devel",
"priority": "optional",
"maintainer": "Ahmed ElSayed <ahmels@microsoft.com>",
"description": "Azure Function Core Tools v4",
"homepage": "https://docs.microsoft.com/en-us/azure/azure-func...",
"conflicts": "azure-functions-core-tools-2, azure-functions-co...",
"replaces": "azure-functions-core-tools-2, azure-functions-cor...",
"vcs_git": "https://github.com/Azure/azure-functions-core-tool...",
"sha256": "a2a4f99d6d98ba0a46832570285552f2a93bab06cebbda2afc7...",
"size": "124417844",
"filename": "pool/main/a/azure-functions-core-tools-4/azure-fu..."
}
]
<a id="jc.parsers.pkg_index_deb.parse"></a>
### parse
```python
def parse(data: str,
raw: bool = False,
quiet: bool = False) -> List[JSONDictType]
```
Main text parsing function
Parameters:
data: (string) text data to parse
raw: (boolean) unprocessed output if True
quiet: (boolean) suppress warning messages if True
Returns:
List of Dictionaries. Raw or processed structured data.
### Parser Information
Compatibility: linux, darwin, cygwin, win32, aix, freebsd
Version 1.0 by Kelly Brazil (kellyjonbrazil@gmail.com)
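The Packages index uses deb822-style stanzas: blank-line separated records of `Field: value` lines. A minimal stand-alone sketch (not jc's implementation; multi-line continuation fields are omitted here):

```python
def parse_packages_index(text):
    """Parse blank-line-separated stanzas of 'Field: value' lines,
    lowercasing field names and mapping '-' to '_' as in the schema.
    Continuation lines (starting with a space) are skipped."""
    packages = []
    for stanza in text.split('\n\n'):
        pkg = {}
        for line in stanza.splitlines():
            if ':' in line and not line.startswith(' '):
                key, _, value = line.partition(':')
                pkg[key.strip().lower().replace('-', '_')] = value.strip()
        if pkg:
            packages.append(pkg)
    return packages

sample = 'Package: yasm\nVersion: 1.3.0-r4\nInstalled-Size: 71081'
print(parse_packages_index(sample))
```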


@@ -139,4 +139,4 @@ Returns:
### Parser Information
Compatibility: linux
Version 1.1 by Kelly Brazil (kellyjonbrazil@gmail.com)
Version 1.2 by Kelly Brazil (kellyjonbrazil@gmail.com)


@@ -0,0 +1,92 @@
[Home](https://kellyjonbrazil.github.io/jc/)
<a id="jc.parsers.proc_cmdline"></a>
# jc.parsers.proc\_cmdline
jc - JSON Convert `/proc/cmdline` file parser
Usage (cli):
$ cat /proc/cmdline | jc --proc
or
$ jc /proc/cmdline
or
$ cat /proc/cmdline | jc --proc-cmdline
Usage (module):
import jc
result = jc.parse('proc_cmdline', proc_cmdline_file)
Schema:
{
"<key>": string,
"_options": [
string
]
}
Examples:
$ cat /proc/cmdline | jc --proc -p
{
"BOOT_IMAGE": "clonezilla/live/vmlinuz",
"consoleblank": "0",
"keyboard-options": "grp:ctrl_shift_toggle,lctrl_shift_toggle",
"ethdevice-timeout": "130",
"toram": "filesystem.squashfs",
"boot": "live",
"edd": "on",
"ocs_daemonon": "ssh lighttpd",
"ocs_live_run": "sudo screen /usr/sbin/ocs-sr -g auto -e1 auto -e2 -batch -r -j2 -k -scr -p true restoreparts win7-64 sda1",
"ocs_live_extra_param": "",
"keyboard-layouts": "us,ru",
"ocs_live_batch": "no",
"locales": "ru_RU.UTF-8",
"vga": "788",
"net.ifnames": "0",
"union": "overlay",
"fetch": "http://10.1.1.1/tftpboot/clonezilla/live/filesystem.squashfs",
"ocs_postrun99": "sudo reboot",
"initrd": "clonezilla/live/initrd.img",
"_options": [
"config",
"noswap",
"nolocales",
"nomodeset",
"noprompt",
"nosplash",
"nodmraid",
"components"
]
}
<a id="jc.parsers.proc_cmdline.parse"></a>
### parse
```python
def parse(data: str, raw: bool = False, quiet: bool = False) -> JSONDictType
```
Main text parsing function
Parameters:
data: (string) text data to parse
raw: (boolean) unprocessed output if True
quiet: (boolean) suppress warning messages if True
Returns:
Dictionary. Raw or processed structured data.
### Parser Information
Compatibility: linux
Version 1.0 by Kelly Brazil (kellyjonbrazil@gmail.com)
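The split between `<key>` entries and the `_options` list can be sketched stand-alone (not jc's implementation; quoted values containing spaces, as in the `ocs_live_run` example above, need extra handling omitted here):

```python
def parse_proc_cmdline(data):
    """Split /proc/cmdline on whitespace: tokens containing '='
    become key/value pairs, bare tokens collect under '_options'."""
    result = {}
    options = []
    for token in data.split():
        if '=' in token:
            key, _, value = token.partition('=')
            result[key] = value
        else:
            options.append(token)
    if options:
        result['_options'] = options
    return result

print(parse_proc_cmdline('BOOT_IMAGE=clonezilla/live/vmlinuz consoleblank=0 noswap noprompt'))
```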


@@ -0,0 +1,186 @@
[Home](https://kellyjonbrazil.github.io/jc/)
<a id="jc.parsers.proc_net_tcp"></a>
# jc.parsers.proc\_net\_tcp
jc - JSON Convert `/proc/net/tcp` and `/proc/net/tcp6` file parser
IPv4 and IPv6 addresses are converted to standard notation unless the raw
(--raw) option is used.
Usage (cli):
$ cat /proc/net/tcp | jc --proc
or
$ jc /proc/net/tcp
or
$ cat /proc/net/tcp | jc --proc-net-tcp
Usage (module):
import jc
result = jc.parse('proc', proc_net_tcp_file)
or
import jc
result = jc.parse('proc_net_tcp', proc_net_tcp_file)
Schema:
Field names and types gathered from the following:
https://www.kernel.org/doc/Documentation/networking/proc_net_tcp.txt
https://github.com/torvalds/linux/blob/master/net/ipv4/tcp_ipv4.c
https://github.com/torvalds/linux/blob/master/net/ipv6/tcp_ipv6.c
[
{
"entry": integer,
"local_address": string,
"local_port": integer,
"remote_address": string,
"remote_port": integer,
"state": string,
"tx_queue": string,
"rx_queue": string,
"timer_active": integer,
"jiffies_until_timer_expires": string,
"unrecovered_rto_timeouts": string,
"uid": integer,
"unanswered_0_window_probes": integer,
"inode": integer,
"sock_ref_count": integer,
"sock_mem_loc": string,
"retransmit_timeout": integer,
"soft_clock_tick": integer,
"ack_quick_pingpong": integer,
"sending_congestion_window": integer,
"slow_start_size_threshold": integer
}
]
Examples:
$ cat /proc/net/tcp | jc --proc -p
[
{
"entry": "0",
"local_address": "10.0.0.28",
"local_port": 42082,
"remote_address": "64.12.0.108",
"remote_port": 80,
"state": "04",
"tx_queue": "00000001",
"rx_queue": "00000000",
"timer_active": 1,
"jiffies_until_timer_expires": "00000015",
"unrecovered_rto_timeouts": "00000000",
"uid": 0,
"unanswered_0_window_probes": 0,
"inode": 0,
"sock_ref_count": 3,
"sock_mem_loc": "ffff8c7a0de930c0",
"retransmit_timeout": 21,
"soft_clock_tick": 4,
"ack_quick_pingpong": 30,
"sending_congestion_window": 10,
"slow_start_size_threshold": -1
},
{
"entry": "1",
"local_address": "10.0.0.28",
"local_port": 38864,
"remote_address": "104.244.42.65",
"remote_port": 80,
"state": "06",
"tx_queue": "00000000",
"rx_queue": "00000000",
"timer_active": 3,
"jiffies_until_timer_expires": "000007C5",
"unrecovered_rto_timeouts": "00000000",
"uid": 0,
"unanswered_0_window_probes": 0,
"inode": 0,
"sock_ref_count": 3,
"sock_mem_loc": "ffff8c7a12d31aa0"
},
...
]
$ cat /proc/net/tcp | jc --proc -p -r
[
{
"entry": "1",
"local_address": "1C00000A",
"local_port": "A462",
"remote_address": "6C000C40",
"remote_port": "0050",
"state": "04",
"tx_queue": "00000001",
"rx_queue": "00000000",
"timer_active": "01",
"jiffies_until_timer_expires": "00000015",
"unrecovered_rto_timeouts": "00000000",
"uid": "0",
"unanswered_0_window_probes": "0",
"inode": "0",
"sock_ref_count": "3",
"sock_mem_loc": "ffff8c7a0de930c0",
"retransmit_timeout": "21",
"soft_clock_tick": "4",
"ack_quick_pingpong": "30",
"sending_congestion_window": "10",
"slow_start_size_threshold": "-1"
},
{
"entry": "2",
"local_address": "1C00000A",
"local_port": "97D0",
"remote_address": "412AF468",
"remote_port": "0050",
"state": "06",
"tx_queue": "00000000",
"rx_queue": "00000000",
"timer_active": "03",
"jiffies_until_timer_expires": "000007C5",
"unrecovered_rto_timeouts": "00000000",
"uid": "0",
"unanswered_0_window_probes": "0",
"inode": "0",
"sock_ref_count": "3",
"sock_mem_loc": "ffff8c7a12d31aa0"
},
...
]
<a id="jc.parsers.proc_net_tcp.parse"></a>
### parse
```python
def parse(data: str, raw: bool = False, quiet: bool = False) -> List[Dict]
```
Main text parsing function
Parameters:
data: (string) text data to parse
raw: (boolean) unprocessed output if True
quiet: (boolean) suppress warning messages if True
Returns:
List of Dictionaries. Raw or processed structured data.
### Parser Information
Compatibility: linux
Version 1.0 by Alvin Solomon (alvinms01@gmail.com)
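The address conversion between the raw and processed examples above can be sketched stand-alone. The kernel writes these fields in host byte order, so this sketch assumes a little-endian host (as on x86), which is why the raw hex bytes appear reversed:

```python
import socket
import struct

def decode_ipv4(hex_addr):
    """Convert a /proc/net/tcp 'address:port' field (little-endian
    hex IPv4 address, hex port) to standard dotted-quad notation."""
    addr, _, port = hex_addr.partition(':')
    ip = socket.inet_ntoa(struct.pack('<I', int(addr, 16)))
    return ip, int(port, 16)

print(decode_ipv4('1C00000A:A462'))
```

`1C00000A:A462` decodes to `10.0.0.28` port `42082`, matching the first entry in the examples above.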


@@ -0,0 +1,83 @@
[Home](https://kellyjonbrazil.github.io/jc/)
<a id="jc.parsers.resolve_conf"></a>
# jc.parsers.resolve\_conf
jc - JSON Convert `/etc/resolve.conf` file parser
This parser may be more forgiving than the system parser. For example, if
multiple `search` lists are defined, this parser will append all entries to
the `search` field, while the system parser may only use the list from the
last defined instance.
Usage (cli):
$ cat /etc/resolve.conf | jc --resolve-conf
Usage (module):
import jc
result = jc.parse('resolve_conf', resolve_conf_output)
Schema:
{
"domain": string,
"search": [
string
],
"nameservers": [
string
],
"options": [
string
],
"sortlist": [
string
]
}
Examples:
$ cat /etc/resolve.conf | jc --resolve-conf -p
{
"search": [
"eng.myprime.com",
"dev.eng.myprime.com",
"labs.myprime.com",
"qa.myprime.com"
],
"nameservers": [
"10.136.17.15"
],
"options": [
"rotate",
"ndots:1"
]
}
<a id="jc.parsers.resolve_conf.parse"></a>
### parse
```python
def parse(data: str, raw: bool = False, quiet: bool = False) -> JSONDictType
```
Main text parsing function
Parameters:
data: (string) text data to parse
raw: (boolean) unprocessed output if True
quiet: (boolean) suppress warning messages if True
Returns:
Dictionary. Raw or processed structured data.
### Parser Information
Compatibility: linux, darwin, cygwin, win32, aix, freebsd
Version 1.0 by Kelly Brazil (kellyjonbrazil@gmail.com)
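The append-all behavior described above (multiple `search` lines merging into one list) can be sketched stand-alone, not as jc's implementation:

```python
def parse_resolv_conf(text):
    """Accumulate resolv.conf keywords into lists: repeated `search`
    (or `options`/`sortlist`) lines extend the same list rather than
    replacing it, and `nameserver` entries collect under 'nameservers'."""
    result = {}
    for line in text.splitlines():
        fields = line.split()
        if not fields or fields[0].startswith('#'):
            continue
        keyword, args = fields[0], fields[1:]
        if keyword == 'nameserver':
            result.setdefault('nameservers', []).extend(args)
        elif keyword in ('search', 'options', 'sortlist'):
            result.setdefault(keyword, []).extend(args)
        elif keyword == 'domain':
            result['domain'] = args[0]
    return result

sample = 'search eng.myprime.com dev.eng.myprime.com\nnameserver 10.136.17.15\noptions rotate ndots:1'
print(parse_resolv_conf(sample))
```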


@@ -22,6 +22,13 @@ Schema:
[
{
"interfaces": [
{
"id": string,
"mac": string,
"name": string,
}
]
"destination": string,
"gateway": string,
"genmask": string,
@@ -129,6 +136,6 @@ Returns:
List of Dictionaries. Raw or processed structured data.
### Parser Information
Compatibility: linux
Compatibility: linux, win32
Version 1.8 by Kelly Brazil (kellyjonbrazil@gmail.com)
Version 1.9 by Kelly Brazil (kellyjonbrazil@gmail.com)


@@ -184,4 +184,4 @@ Returns:
### Parser Information
Compatibility: linux
Version 1.6 by Kelly Brazil (kellyjonbrazil@gmail.com)
Version 1.7 by Kelly Brazil (kellyjonbrazil@gmail.com)

docs/parsers/srt.md Normal file

@@ -0,0 +1,136 @@
[Home](https://kellyjonbrazil.github.io/jc/)
<a id="jc.parsers.srt"></a>
# jc.parsers.srt
jc - JSON Convert `SRT` file parser
Usage (cli):
$ cat foo.srt | jc --srt
Usage (module):
import jc
result = jc.parse('srt', srt_file_output)
Schema:
[
{
"index": int,
"start": {
"hours": int,
"minutes": int,
"seconds": int,
"milliseconds": int,
"timestamp": string
},
"end": {
"hours": int,
"minutes": int,
"seconds": int,
"milliseconds": int,
"timestamp": string
},
"content": string
}
]
Examples:
$ cat attack_of_the_clones.srt
1
00:02:16,612 --> 00:02:19,376
Senator, we're making
our final approach into Coruscant.
2
00:02:19,482 --> 00:02:21,609
Very good, Lieutenant.
...
$ cat attack_of_the_clones.srt | jc --srt
[
{
"index": 1,
"start": {
"hours": 0,
"minutes": 2,
"seconds": 16,
"milliseconds": 612,
"timestamp": "00:02:16,612"
},
"end": {
"hours": 0,
"minutes": 2,
"seconds": 19,
"milliseconds": 376,
"timestamp": "00:02:19,376"
},
"content": "Senator, we're making\nour final approach into Coruscant."
},
{
"index": 2,
"start": {
"hours": 0,
"minutes": 2,
"seconds": 19,
"milliseconds": 482,
"timestamp": "00:02:19,482"
},
"end": {
"hours": 0,
"minutes": 2,
"seconds": 21,
"milliseconds": 609,
"timestamp": "00:02:21,609"
},
"content": "Very good, Lieutenant."
},
...
]
<a id="jc.parsers.srt.parse_timestamp"></a>
### parse\_timestamp
```python
def parse_timestamp(timestamp: str) -> Dict
```
timestamp: "hours:minutes:seconds,milliseconds" --->
{
"hours": "hours",
"minutes": "minutes",
"seconds": "seconds",
"milliseconds": "milliseconds",
"timestamp": "hours:minutes:seconds,milliseconds"
}
<a id="jc.parsers.srt.parse"></a>
### parse
```python
def parse(data: str,
raw: bool = False,
quiet: bool = False) -> List[JSONDictType]
```
Main text parsing function
Parameters:
data: (string) text data to parse
raw: (boolean) unprocessed output if True
quiet: (boolean) suppress warning messages if True
Returns:
List of Dictionaries. Raw or processed structured data.
### Parser Information
Compatibility: linux, darwin, cygwin, win32, aix, freebsd
Version 1.0 by Mark Rotner (rotner.mr@gmail.com)
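The `parse_timestamp` contract documented above (components as integers plus the original string) can be sketched stand-alone, not as jc's implementation:

```python
import re

def parse_timestamp(timestamp):
    """Split an SRT 'HH:MM:SS,mmm' timestamp into integer components
    while keeping the original string under 'timestamp'."""
    match = re.match(r'(\d+):(\d+):(\d+),(\d+)', timestamp)
    hours, minutes, seconds, milliseconds = (int(g) for g in match.groups())
    return {
        'hours': hours,
        'minutes': minutes,
        'seconds': seconds,
        'milliseconds': milliseconds,
        'timestamp': timestamp,
    }

print(parse_timestamp('00:02:16,612'))
```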


@@ -5,9 +5,6 @@
jc - JSON Convert `ss` command output parser
Extended information options like `-e` and `-p` are not supported and may
cause parsing irregularities.
Usage (cli):
$ ss | jc --ss
@@ -28,21 +25,29 @@ field names
[
{
"netid": string,
"state": string,
"recv_q": integer,
"send_q": integer,
"local_address": string,
"local_port": string,
"local_port_num": integer,
"peer_address": string,
"peer_port": string,
"peer_port_num": integer,
"interface": string,
"link_layer" string,
"channel": string,
"path": string,
"pid": integer
"netid": string,
"state": string,
"recv_q": integer,
"send_q": integer,
"local_address": string,
"local_port": string,
"local_port_num": integer,
"peer_address": string,
"peer_port": string,
"peer_port_num": integer,
"interface": string,
"link_layer" string,
"channel": string,
"path": string,
"pid": integer,
"opts": {
"process_id": {
"<process_id>": {
"user": string,
"file_descriptor": string
}
}
}
}
]
@@ -303,4 +308,4 @@ Returns:
### Parser Information
Compatibility: linux
Version 1.6 by Kelly Brazil (kellyjonbrazil@gmail.com)
Version 1.7 by Kelly Brazil (kellyjonbrazil@gmail.com)


@@ -193,4 +193,4 @@ Returns:
### Parser Information
Compatibility: linux, darwin, freebsd
Version 1.12 by Kelly Brazil (kellyjonbrazil@gmail.com)
Version 1.13 by Kelly Brazil (kellyjonbrazil@gmail.com)

docs/parsers/swapon.md Normal file

@@ -0,0 +1,69 @@
[Home](https://kellyjonbrazil.github.io/jc/)
<a id="jc.parsers.swapon"></a>
# jc.parsers.swapon
jc - JSON Convert `swapon` command output parser
Usage (cli):
$ swapon | jc --swapon
or
$ jc swapon
Usage (module):
import jc
result = jc.parse('swapon', swapon_command_output)
Schema:
[
{
"name": string,
"type": string,
"size": integer,
"used": integer,
"priority": integer
}
]
Example:
$ swapon | jc --swapon
[
{
"name": "/swapfile",
"type": "file",
"size": 1073741824,
"used": 0,
"priority": -2
}
]
<a id="jc.parsers.swapon.parse"></a>
### parse
```python
def parse(data: str, raw: bool = False, quiet: bool = False) -> List[_Entry]
```
Main text parsing function
Parameters:
data: (string) text data to parse
raw: (boolean) unprocessed output if True
quiet: (boolean) suppress warning messages if True
Returns:
List of Dictionaries. Raw or processed structured data.
### Parser Information
Compatibility: linux, freebsd
Version 1.0 by Roey Darwish Dror (roey.ghost@gmail.com)
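The conversion from `swapon`'s human-readable sizes to the integer byte counts in the schema can be sketched stand-alone (not jc's implementation; binary multiples are assumed here, matching the `1073741824` for a 1G swapfile in the example above):

```python
def to_bytes(size):
    """Convert a swapon size like '1G' or '0B' to an integer byte
    count, assuming binary (1024-based) multiples."""
    if size[-1].isdigit():
        return int(size)
    value, unit = size[:-1], size[-1]
    multiplier = {'B': 1, 'K': 2**10, 'M': 2**20, 'G': 2**30, 'T': 2**40}[unit]
    return int(float(value) * multiplier)

def parse_swapon_line(line):
    """Parse one data row of `swapon` output (NAME TYPE SIZE USED PRIO)."""
    name, swap_type, size, used, prio = line.split()
    return {
        'name': name,
        'type': swap_type,
        'size': to_bytes(size),
        'used': to_bytes(used),
        'priority': int(prio),
    }

print(parse_swapon_line('/swapfile file 1G 0B -2'))
```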

docs/parsers/tune2fs.md Normal file

@@ -0,0 +1,235 @@
[Home](https://kellyjonbrazil.github.io/jc/)
<a id="jc.parsers.tune2fs"></a>
# jc.parsers.tune2fs
jc - JSON Convert `tune2fs -l` command output parser
Usage (cli):
$ tune2fs -l /dev/xvda4 | jc --tune2fs
or
$ jc tune2fs -l /dev/xvda4
Usage (module):
import jc
result = jc.parse('tune2fs', tune2fs_command_output)
Schema:
{
"version": string,
"filesystem_volume_name": string,
"last_mounted_on": string,
"filesystem_uuid": string,
"filesystem_magic_number": string,
"filesystem_revision_number": string,
"filesystem_features": [
string
],
"filesystem_flags": string,
"default_mount_options": string,
"filesystem_state": string,
"errors_behavior": string,
"filesystem_os_type": string,
"inode_count": integer,
"block_count": integer,
"reserved_block_count": integer,
"overhead_clusters": integer,
"free_blocks": integer,
"free_inodes": integer,
"first_block": integer,
"block_size": integer,
"fragment_size": integer,
"group_descriptor_size": integer,
"reserved_gdt_blocks": integer,
"blocks_per_group": integer,
"fragments_per_group": integer,
"inodes_per_group": integer,
"inode_blocks_per_group": integer,
"flex_block_group_size": integer,
"filesystem_created": string,
"filesystem_created_epoch": integer,
"filesystem_created_epoch_utc": integer,
"last_mount_time": string,
"last_mount_time_epoch": integer,
"last_mount_time_epoch_utc": integer,
"last_write_time": string,
"last_write_time_epoch": integer,
"last_write_time_epoch_utc": integer,
"mount_count": integer,
"maximum_mount_count": integer,
"last_checked": string,
"last_checked_epoch": integer,
"last_checked_epoch_utc": integer,
"check_interval": string,
"lifetime_writes": string,
"reserved_blocks_uid": string,
"reserved_blocks_gid": string,
"first_inode": integer,
"inode_size": integer,
"required_extra_isize": integer,
"desired_extra_isize": integer,
"journal_inode": integer,
"default_directory_hash": string,
"directory_hash_seed": string,
"journal_backup": string,
"checksum_type": string,
"checksum": string
}
Examples:
$ tune2fs | jc --tune2fs -p
{
"version": "1.46.2 (28-Feb-2021)",
"filesystem_volume_name": "<none>",
"last_mounted_on": "/home",
"filesystem_uuid": "5fb78e1a-b214-44e2-a309-8e35116d8dd6",
"filesystem_magic_number": "0xEF53",
"filesystem_revision_number": "1 (dynamic)",
"filesystem_features": [
"has_journal",
"ext_attr",
"resize_inode",
"dir_index",
"filetype",
"needs_recovery",
"extent",
"64bit",
"flex_bg",
"sparse_super",
"large_file",
"huge_file",
"dir_nlink",
"extra_isize",
"metadata_csum"
],
"filesystem_flags": "signed_directory_hash",
"default_mount_options": "user_xattr acl",
"filesystem_state": "clean",
"errors_behavior": "Continue",
"filesystem_os_type": "Linux",
"inode_count": 3932160,
"block_count": 15728640,
"reserved_block_count": 786432,
"free_blocks": 15198453,
"free_inodes": 3864620,
"first_block": 0,
"block_size": 4096,
"fragment_size": 4096,
"group_descriptor_size": 64,
"reserved_gdt_blocks": 1024,
"blocks_per_group": 32768,
"fragments_per_group": 32768,
"inodes_per_group": 8192,
"inode_blocks_per_group": 512,
"flex_block_group_size": 16,
"filesystem_created": "Mon Apr 6 15:10:37 2020",
"last_mount_time": "Mon Sep 19 15:16:20 2022",
"last_write_time": "Mon Sep 19 15:16:20 2022",
"mount_count": 14,
"maximum_mount_count": -1,
"last_checked": "Fri Apr 8 15:24:22 2022",
"check_interval": "0 (<none>)",
"lifetime_writes": "203 GB",
"reserved_blocks_uid": "0 (user root)",
"reserved_blocks_gid": "0 (group root)",
"first_inode": 11,
"inode_size": 256,
"required_extra_isize": 32,
"desired_extra_isize": 32,
"journal_inode": 8,
"default_directory_hash": "half_md4",
"directory_hash_seed": "67d5358d-723d-4ce3-b3c0-30ddb433ad9e",
"journal_backup": "inode blocks",
"checksum_type": "crc32c",
"checksum": "0x7809afff",
"filesystem_created_epoch": 1586211037,
"filesystem_created_epoch_utc": null,
"last_mount_time_epoch": 1663625780,
"last_mount_time_epoch_utc": null,
"last_write_time_epoch": 1663625780,
"last_write_time_epoch_utc": null,
"last_checked_epoch": 1649456662,
"last_checked_epoch_utc": null
}
    $ tune2fs -l /dev/sda1 | jc --tune2fs -p -r
{
"version": "1.46.2 (28-Feb-2021)",
"filesystem_volume_name": "<none>",
"last_mounted_on": "/home",
"filesystem_uuid": "5fb78e1a-b214-44e2-a309-8e35116d8dd6",
"filesystem_magic_number": "0xEF53",
"filesystem_revision_number": "1 (dynamic)",
"filesystem_features": "has_journal ext_attr resize_inode dir_index filetype needs_recovery extent 64bit flex_bg sparse_super large_file huge_file dir_nlink extra_isize metadata_csum",
"filesystem_flags": "signed_directory_hash",
"default_mount_options": "user_xattr acl",
"filesystem_state": "clean",
"errors_behavior": "Continue",
"filesystem_os_type": "Linux",
"inode_count": "3932160",
"block_count": "15728640",
"reserved_block_count": "786432",
"free_blocks": "15198453",
"free_inodes": "3864620",
"first_block": "0",
"block_size": "4096",
"fragment_size": "4096",
"group_descriptor_size": "64",
"reserved_gdt_blocks": "1024",
"blocks_per_group": "32768",
"fragments_per_group": "32768",
"inodes_per_group": "8192",
"inode_blocks_per_group": "512",
"flex_block_group_size": "16",
"filesystem_created": "Mon Apr 6 15:10:37 2020",
"last_mount_time": "Mon Sep 19 15:16:20 2022",
"last_write_time": "Mon Sep 19 15:16:20 2022",
"mount_count": "14",
"maximum_mount_count": "-1",
"last_checked": "Fri Apr 8 15:24:22 2022",
"check_interval": "0 (<none>)",
"lifetime_writes": "203 GB",
"reserved_blocks_uid": "0 (user root)",
"reserved_blocks_gid": "0 (group root)",
"first_inode": "11",
"inode_size": "256",
"required_extra_isize": "32",
"desired_extra_isize": "32",
"journal_inode": "8",
"default_directory_hash": "half_md4",
"directory_hash_seed": "67d5358d-723d-4ce3-b3c0-30ddb433ad9e",
"journal_backup": "inode blocks",
"checksum_type": "crc32c",
"checksum": "0x7809afff"
}
<a id="jc.parsers.tune2fs.parse"></a>
### parse
```python
def parse(data: str, raw: bool = False, quiet: bool = False) -> JSONDictType
```
Main text parsing function
Parameters:
data: (string) text data to parse
raw: (boolean) unprocessed output if True
quiet: (boolean) suppress warning messages if True
Returns:
Dictionary. Raw or processed structured data.
### Parser Information
Compatibility: linux
Version 1.0 by Kelly Brazil (kellyjonbrazil@gmail.com)
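As a rough illustration of the key normalization this parser performs, here is a minimal stdlib-only sketch (not the actual jc implementation) that turns `tune2fs -l` style `Key: value` lines into the snake_case keys shown in the schema above:

```python
# Minimal sketch (NOT the real jc tune2fs parser): normalize
# "Key: value" lines into snake_case dictionary keys.
def parse_tune2fs_lines(text: str) -> dict:
    result = {}
    for line in text.splitlines():
        if ':' not in line:
            continue
        key, _, value = line.partition(':')
        key = key.strip().lower().replace(' ', '_')
        result[key] = value.strip()
    return result

sample = """Filesystem volume name:   <none>
Mount count:              14
Inode size:               256"""

print(parse_tune2fs_lines(sample))
```

The real parser additionally converts numeric fields to integers and adds the calculated `*_epoch` timestamp fields shown in the examples.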

@@ -7,7 +7,7 @@ jc - JSON Convert Version string output parser
Best-effort attempt to parse various styles of version numbers. This parser
is based off of the version parser included in the CPython distutils
libary.
library.
If the version string conforms to some de facto-standard versioning rules
followed by many developers a `strict` key will be present in the output

docs/parsers/veracrypt.md Normal file
@@ -0,0 +1,108 @@
[Home](https://kellyjonbrazil.github.io/jc/)
<a id="jc.parsers.veracrypt"></a>
# jc.parsers.veracrypt
jc - JSON Convert `veracrypt` command output parser
Supports the following `veracrypt` subcommands:
- `veracrypt --text --list`
- `veracrypt --text --list --verbose`
- `veracrypt --text --volume-properties <volume>`
Usage (cli):
$ veracrypt --text --list | jc --veracrypt
or
$ jc veracrypt --text --list
Usage (module):
import jc
result = jc.parse('veracrypt', veracrypt_command_output)
Schema:
Volume:
[
{
"slot": integer,
"path": string,
"device": string,
"mountpoint": string,
"size": string,
"type": string,
"readonly": string,
"hidden_protected": string,
"encryption_algo": string,
"pk_size": string,
"sk_size": string,
"block_size": string,
"mode": string,
"prf": string,
"format_version": integer,
"backup_header": string
}
]
Examples:
$ veracrypt --text --list | jc --veracrypt -p
[
{
"slot": 1,
"path": "/dev/sdb1",
"device": "/dev/mapper/veracrypt1",
"mountpoint": "/home/bob/mount/encrypt/sdb1"
}
]
$ veracrypt --text --list --verbose | jc --veracrypt -p
[
{
"slot": 1,
"path": "/dev/sdb1",
"device": "/dev/mapper/veracrypt1",
"mountpoint": "/home/bob/mount/encrypt/sdb1",
"size": "522 MiB",
"type": "Normal",
"readonly": "No",
"hidden_protected": "No",
"encryption_algo": "AES",
"pk_size": "256 bits",
"sk_size": "256 bits",
"block_size": "128 bits",
"mode": "XTS",
"prf": "HMAC-SHA-512",
"format_version": 2,
"backup_header": "Yes"
}
]
<a id="jc.parsers.veracrypt.parse"></a>
### parse
```python
def parse(data: str,
raw: bool = False,
quiet: bool = False) -> List[JSONDictType]
```
Main text parsing function
Parameters:
data: (string) text data to parse
raw: (boolean) unprocessed output if True
quiet: (boolean) suppress warning messages if True
Returns:
List of Dictionaries. Raw or processed structured data.
### Parser Information
Compatibility: linux
Version 1.0 by Jake Ob (iakopap at gmail.com)

@@ -149,4 +149,4 @@ Returns:
### Parser Information
Compatibility: linux
Version 1.3 by Kelly Brazil (kellyjonbrazil@gmail.com)
Version 1.4 by Kelly Brazil (kellyjonbrazil@gmail.com)

@@ -123,4 +123,4 @@ Returns:
### Parser Information
Compatibility: linux
Version 1.2 by Kelly Brazil (kellyjonbrazil@gmail.com)
Version 1.3 by Kelly Brazil (kellyjonbrazil@gmail.com)

@@ -158,4 +158,4 @@ Returns:
### Parser Information
Compatibility: linux, darwin, cygwin, aix, freebsd
Version 1.7 by Kelly Brazil (kellyjonbrazil@gmail.com)
Version 1.8 by Kelly Brazil (kellyjonbrazil@gmail.com)

@@ -433,4 +433,4 @@ Returns:
### Parser Information
Compatibility: linux, darwin, cygwin, win32, aix, freebsd
Version 1.1 by Kelly Brazil (kellyjonbrazil@gmail.com)
Version 1.3 by Kelly Brazil (kellyjonbrazil@gmail.com)

docs/parsers/x509_csr.md Normal file
@@ -0,0 +1,282 @@
[Home](https://kellyjonbrazil.github.io/jc/)
<a id="jc.parsers.x509_csr"></a>
# jc.parsers.x509\_csr
jc - JSON Convert X.509 Certificate Request format file parser
This parser will convert DER and PEM encoded X.509 certificate request files.
Usage (cli):
$ cat certificateRequest.pem | jc --x509-csr
Usage (module):
import jc
result = jc.parse('x509_csr', x509_csr_file_output)
Schema:
[
{
"certification_request_info": {
"version": string,
"serial_number": string, # [0]
"serial_number_str": string,
"signature": {
"algorithm": string,
"parameters": string/null,
},
"issuer": {
"country_name": string,
        "state_or_province_name": string,
"locality_name": string,
"organization_name": array/string,
"organizational_unit_name": array/string,
"common_name": string,
"email_address": string,
"serial_number": string, # [0]
"serial_number_str": string
},
"validity": {
"not_before": integer, # [1]
"not_after": integer, # [1]
"not_before_iso": string,
"not_after_iso": string
},
"subject": {
"country_name": string,
"state_or_province_name": string,
"locality_name": string,
"organization_name": array/string,
"organizational_unit_name": array/string,
"common_name": string,
"email_address": string,
"serial_number": string, # [0]
"serial_number_str": string
},
"subject_public_key_info": {
"algorithm": {
"algorithm": string,
"parameters": string/null,
},
"public_key": {
"modulus": string, # [0]
"public_exponent": integer
}
},
"issuer_unique_id": string/null,
"subject_unique_id": string/null,
"extensions": [
{
"extn_id": string,
"critical": boolean,
"extn_value": array/object/string/integer # [2]
}
]
},
"signature_algorithm": {
"algorithm": string,
"parameters": string/null
},
"signature_value": string # [0]
}
]
[0] in colon-delimited hex notation
[1] time-zone-aware (UTC) epoch timestamp
[2] See below for well-known Extension schemas:
Basic Constraints:
{
"extn_id": "basic_constraints",
"critical": boolean,
"extn_value": {
"ca": boolean,
"path_len_constraint": string/null
}
}
Key Usage:
{
"extn_id": "key_usage",
"critical": boolean,
"extn_value": [
string
]
}
Key Identifier:
{
"extn_id": "key_identifier",
"critical": boolean,
"extn_value": string # [0]
}
Authority Key Identifier:
{
"extn_id": "authority_key_identifier",
"critical": boolean,
"extn_value": {
"key_identifier": string, # [0]
"authority_cert_issuer": string/null,
"authority_cert_serial_number": string/null
}
}
Subject Alternative Name:
{
"extn_id": "subject_alt_name",
"critical": boolean,
"extn_value": [
string
]
}
Certificate Policies:
{
"extn_id": "certificate_policies",
"critical": boolean,
"extn_value": [
{
"policy_identifier": string,
"policy_qualifiers": [ array or null
{
"policy_qualifier_id": string,
"qualifier": string
}
]
}
]
}
Signed Certificate Timestamp List:
{
"extn_id": "signed_certificate_timestamp_list",
"critical": boolean,
"extn_value": string # [0]
}
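The colon-delimited hex notation used for keys marked `[0]` above can be reproduced from raw bytes with a short helper like this (an illustrative sketch, not the parser's internal code):

```python
# Render raw bytes in the colon-delimited hex notation used
# for serial numbers, key identifiers, and signatures above.
def colon_hex(data: bytes) -> str:
    return ':'.join(f'{b:02x}' for b in data)

print(colon_hex(b'\x30\x45\x02\x20'))  # 30:45:02:20
```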
Examples:
    $ cat server.csr | jc --x509-csr -p
[
{
"certification_request_info": {
"version": "v1",
"subject": {
"common_name": "myserver.for.example"
},
"subject_pk_info": {
"algorithm": {
"algorithm": "ec",
"parameters": "secp256r1"
},
"public_key": "04:40:33:c0:91:8f:e9:46:ea:d0:dc:d0:f9:63:2..."
},
"attributes": [
{
"type": "extension_request",
"values": [
[
{
"extn_id": "extended_key_usage",
"critical": false,
"extn_value": [
"server_auth"
]
},
{
"extn_id": "subject_alt_name",
"critical": false,
"extn_value": [
"myserver.for.example"
]
}
]
]
}
]
},
"signature_algorithm": {
"algorithm": "sha384_ecdsa",
"parameters": null
},
"signature": "30:45:02:20:77:ac:5b:51:bf:c5:f5:43:02:52:ae:66:..."
}
]
$ openssl req -in server.csr | jc --x509-csr -p
[
{
"certification_request_info": {
"version": "v1",
"subject": {
"common_name": "myserver.for.example"
},
"subject_pk_info": {
"algorithm": {
"algorithm": "ec",
"parameters": "secp256r1"
},
"public_key": "04:40:33:c0:91:8f:e9:46:ea:d0:dc:d0:f9:63:2..."
},
"attributes": [
{
"type": "extension_request",
"values": [
[
{
"extn_id": "extended_key_usage",
"critical": false,
"extn_value": [
"server_auth"
]
},
{
"extn_id": "subject_alt_name",
"critical": false,
"extn_value": [
"myserver.for.example"
]
}
]
]
}
]
},
"signature_algorithm": {
"algorithm": "sha384_ecdsa",
"parameters": null
},
"signature": "30:45:02:20:77:ac:5b:51:bf:c5:f5:43:02:52:ae:66:..."
}
]
<a id="jc.parsers.x509_csr.parse"></a>
### parse
```python
def parse(data: Union[str, bytes],
raw: bool = False,
quiet: bool = False) -> List[Dict]
```
Main text parsing function
Parameters:
data: (string or bytes) text or binary data to parse
raw: (boolean) unprocessed output if True
quiet: (boolean) suppress warning messages if True
Returns:
List of Dictionaries. Raw or processed structured data.
### Parser Information
Compatibility: linux, darwin, cygwin, win32, aix, freebsd
Version 1.0 by Kelly Brazil (kellyjonbrazil@gmail.com)
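Since `parse` accepts both string and bytes input, it must handle PEM as well as raw DER data. A rough sketch of how the two encodings can be told apart and normalized to DER (an assumed approach for illustration, not the parser's actual logic):

```python
import base64
import re

# Sketch: return DER bytes, decoding the base64 body if the
# input looks like a PEM-armored block.
def pem_to_der(data: bytes) -> bytes:
    if data.lstrip().startswith(b'-----BEGIN'):
        # strip the BEGIN/END armor lines and all whitespace
        body = re.sub(rb'-----[^-]+-----|\s', b'', data)
        return base64.b64decode(body)
    return data  # already DER

pem = b'-----BEGIN CERTIFICATE REQUEST-----\nMAA=\n-----END CERTIFICATE REQUEST-----\n'
print(pem_to_der(pem))
```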

@@ -98,4 +98,4 @@ Returns:
### Parser Information
Compatibility: linux, darwin, cygwin, win32, aix, freebsd
Version 1.7 by Kelly Brazil (kellyjonbrazil@gmail.com)
Version 1.9 by Kelly Brazil (kellyjonbrazil@gmail.com)

@@ -31,22 +31,24 @@ Schema:
"current_height": integer,
"maximum_width": integer,
"maximum_height": integer,
"associated_device": {
"associated_modes": [
{
"resolution_width": integer,
"resolution_height": integer,
"is_high_resolution": boolean,
"frequencies": [
{
"frequency": float,
"is_current": boolean,
"is_preferred": boolean
}
]
}
]
},
"devices": [
{
"modes": [
{
"resolution_width": integer,
"resolution_height": integer,
"is_high_resolution": boolean,
"frequencies": [
{
"frequency": float,
"is_current": boolean,
"is_preferred": boolean
}
]
}
]
}
],
"is_connected": boolean,
"is_primary": boolean,
"device_name": string,
@@ -62,24 +64,6 @@ Schema:
"rotation": string,
"reflection": string
}
],
"unassociated_devices": [
{
"associated_modes": [
{
"resolution_width": integer,
"resolution_height": integer,
"is_high_resolution": boolean,
"frequencies": [
{
"frequency": float,
"is_current": boolean,
"is_preferred": boolean
}
]
}
]
}
]
}
@@ -96,53 +80,54 @@ Examples:
"current_height": 1080,
"maximum_width": 32767,
"maximum_height": 32767,
"associated_device": {
"associated_modes": [
{
"resolution_width": 1920,
"resolution_height": 1080,
"is_high_resolution": false,
"frequencies": [
{
"frequency": 60.03,
"is_current": true,
"is_preferred": true
},
{
"frequency": 59.93,
"is_current": false,
"is_preferred": false
}
]
},
{
"resolution_width": 1680,
"resolution_height": 1050,
"is_high_resolution": false,
"frequencies": [
{
"frequency": 59.88,
"is_current": false,
"is_preferred": false
}
]
}
],
"is_connected": true,
"is_primary": true,
"device_name": "eDP1",
"resolution_width": 1920,
"resolution_height": 1080,
"offset_width": 0,
"offset_height": 0,
"dimension_width": 310,
"dimension_height": 170,
"rotation": "normal",
"reflection": "normal"
}
"devices": [
{
"modes": [
{
"resolution_width": 1920,
"resolution_height": 1080,
"is_high_resolution": false,
"frequencies": [
{
"frequency": 60.03,
"is_current": true,
"is_preferred": true
},
{
"frequency": 59.93,
"is_current": false,
"is_preferred": false
}
]
},
{
"resolution_width": 1680,
"resolution_height": 1050,
"is_high_resolution": false,
"frequencies": [
{
"frequency": 59.88,
"is_current": false,
"is_preferred": false
}
]
}
],
"is_connected": true,
"is_primary": true,
"device_name": "eDP1",
"resolution_width": 1920,
"resolution_height": 1080,
"offset_width": 0,
"offset_height": 0,
"dimension_width": 310,
"dimension_height": 170,
"rotation": "normal",
"reflection": "normal"
}
]
}
],
"unassociated_devices": []
]
}
$ xrandr --properties | jc --xrandr -p
@@ -156,56 +141,57 @@ Examples:
"current_height": 1080,
"maximum_width": 32767,
"maximum_height": 32767,
"associated_device": {
"associated_modes": [
{
"resolution_width": 1920,
"resolution_height": 1080,
"is_high_resolution": false,
"frequencies": [
{
"frequency": 60.03,
"is_current": true,
"is_preferred": true
},
{
"frequency": 59.93,
"is_current": false,
"is_preferred": false
}
]
},
{
"resolution_width": 1680,
"resolution_height": 1050,
"is_high_resolution": false,
"frequencies": [
{
"frequency": 59.88,
"is_current": false,
"is_preferred": false
}
]
}
],
"is_connected": true,
"is_primary": true,
"device_name": "eDP1",
"model_name": "ASUS VW193S",
"product_id": "54297",
"serial_number": "78L8021107",
"resolution_width": 1920,
"resolution_height": 1080,
"offset_width": 0,
"offset_height": 0,
"dimension_width": 310,
"dimension_height": 170,
"rotation": "normal",
"reflection": "normal"
}
"devices": [
{
"modes": [
{
"resolution_width": 1920,
"resolution_height": 1080,
"is_high_resolution": false,
"frequencies": [
{
"frequency": 60.03,
"is_current": true,
"is_preferred": true
},
{
"frequency": 59.93,
"is_current": false,
"is_preferred": false
}
]
},
{
"resolution_width": 1680,
"resolution_height": 1050,
"is_high_resolution": false,
"frequencies": [
{
"frequency": 59.88,
"is_current": false,
"is_preferred": false
}
]
}
],
"is_connected": true,
"is_primary": true,
"device_name": "eDP1",
"model_name": "ASUS VW193S",
"product_id": "54297",
"serial_number": "78L8021107",
"resolution_width": 1920,
"resolution_height": 1080,
"offset_width": 0,
"offset_height": 0,
"dimension_width": 310,
"dimension_height": 170,
"rotation": "normal",
"reflection": "normal"
}
]
}
],
"unassociated_devices": []
]
}
<a id="jc.parsers.xrandr.parse"></a>
@@ -231,4 +217,4 @@ Returns:
### Parser Information
Compatibility: linux, darwin, cygwin, aix, freebsd
Version 1.2 by Kevin Lyter (lyter_git at sent.com)
Version 1.4 by Kevin Lyter (code (at) lyterk.com)

@@ -9,6 +9,7 @@
* [convert\_to\_int](#jc.utils.convert_to_int)
* [convert\_to\_float](#jc.utils.convert_to_float)
* [convert\_to\_bool](#jc.utils.convert_to_bool)
* [convert\_size\_to\_int](#jc.utils.convert_size_to_int)
* [input\_type\_check](#jc.utils.input_type_check)
* [timestamp](#jc.utils.timestamp)
* [\_\_init\_\_](#jc.utils.timestamp.__init__)
@@ -178,6 +179,48 @@ Returns:
True/False False unless a 'truthy' number or string is found
('y', 'yes', 'true', '1', 1, -1, etc.)
<a id="jc.utils.convert_size_to_int"></a>
### convert\_size\_to\_int
```python
def convert_size_to_int(size: str, binary: bool = False) -> Optional[int]
```
Parse a human readable data size and return the number of bytes.
Parameters:
size: (string) The human readable file size to parse.
binary: (boolean) `True` to use binary multiples of bytes
(base-2) for ambiguous unit symbols and names,
`False` to use decimal multiples of bytes (base-10).
Returns:
integer/None Integer if successful conversion, otherwise None
This function knows how to parse sizes in bytes, kilobytes, megabytes,
gigabytes, terabytes and petabytes. Some examples:
>>> convert_size_to_int('42')
42
>>> convert_size_to_int('13b')
13
>>> convert_size_to_int('5 bytes')
5
>>> convert_size_to_int('1 KB')
1000
>>> convert_size_to_int('1 kilobyte')
1000
>>> convert_size_to_int('1 KiB')
1024
>>> convert_size_to_int('1 KB', binary=True)
1024
>>> convert_size_to_int('1.5 GB')
1500000000
>>> convert_size_to_int('1.5 GB', binary=True)
1610612736
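A simplified sketch of how such a conversion can be implemented (this is not the actual `jc.utils` code, which handles more unit spellings and edge cases, but it reproduces the behavior shown in the examples above):

```python
import re
from typing import Optional

# Simplified sketch of a human-readable size parser.
# Unit prefix -> exponent applied to the base (1000 or 1024).
_UNITS = {'': 0, 'b': 0, 'k': 1, 'm': 2, 'g': 3, 't': 4, 'p': 5}

def size_to_int(size: str, binary: bool = False) -> Optional[int]:
    m = re.fullmatch(r'\s*([\d.]+)\s*([a-zA-Z]*)\s*', size)
    if not m:
        return None
    number = float(m.group(1))
    unit = m.group(2).lower()
    exp = _UNITS.get(unit[:1])
    if exp is None:
        return None
    # "KiB"-style symbols are always binary; otherwise honor the flag
    base = 1024 if binary or unit.endswith('ib') else 1000
    return int(number * base ** exp)

print(size_to_int('1.5 GB'))               # 1500000000
print(size_to_int('1.5 GB', binary=True))  # 1610612736
```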
<a id="jc.utils.input_type_check"></a>
### input\_type\_check

@@ -45,10 +45,6 @@ __version_info__ = tuple(int(segment) for segment in __version__.split("."))
import sys
import os
PY3 = sys.version_info[0] == 3
if PY3:
unicode = str
if sys.platform.startswith('java'):
import platform
@@ -490,10 +486,7 @@ def _get_win_folder_from_registry(csidl_name):
registry for this guarantees us the correct answer for all CSIDL_*
names.
"""
if PY3:
import winreg as _winreg
else:
import _winreg
import winreg as _winreg
shell_folder_name = {
"CSIDL_APPDATA": "AppData",

@@ -145,33 +145,34 @@ class JcCli():
JC_COLORS=blue,brightblack,magenta,green
JC_COLORS=default,default,default,default
"""
input_error = False
env_colors = os.getenv('JC_COLORS')
if PYGMENTS_INSTALLED:
input_error = False
env_colors = os.getenv('JC_COLORS')
if env_colors:
color_list = env_colors.split(',')
else:
color_list = ['default', 'default', 'default', 'default']
if env_colors:
color_list = env_colors.split(',')
else:
color_list = ['default', 'default', 'default', 'default']
if len(color_list) != 4:
input_error = True
for color in color_list:
if color != 'default' and color not in PYGMENT_COLOR:
if len(color_list) != 4:
input_error = True
# if there is an issue with the env variable, just set all colors to default and move on
if input_error:
utils.warning_message(['Could not parse JC_COLORS environment variable'])
color_list = ['default', 'default', 'default', 'default']
for color in color_list:
if color != 'default' and color not in PYGMENT_COLOR:
input_error = True
# Try the color set in the JC_COLORS env variable first. If it is set to default, then fall back to default colors
self.custom_colors = {
Name.Tag: f'bold {PYGMENT_COLOR[color_list[0]]}' if color_list[0] != 'default' else f"bold {PYGMENT_COLOR['blue']}", # key names
Keyword: PYGMENT_COLOR[color_list[1]] if color_list[1] != 'default' else PYGMENT_COLOR['brightblack'], # true, false, null
Number: PYGMENT_COLOR[color_list[2]] if color_list[2] != 'default' else PYGMENT_COLOR['magenta'], # numbers
String: PYGMENT_COLOR[color_list[3]] if color_list[3] != 'default' else PYGMENT_COLOR['green'] # strings
}
# if there is an issue with the env variable, just set all colors to default and move on
if input_error:
utils.warning_message(['Could not parse JC_COLORS environment variable'])
color_list = ['default', 'default', 'default', 'default']
# Try the color set in the JC_COLORS env variable first. If it is set to default, then fall back to default colors
self.custom_colors = {
Name.Tag: f'bold {PYGMENT_COLOR[color_list[0]]}' if color_list[0] != 'default' else f"bold {PYGMENT_COLOR['blue']}", # key names
Keyword: PYGMENT_COLOR[color_list[1]] if color_list[1] != 'default' else PYGMENT_COLOR['brightblack'], # true, false, null
Number: PYGMENT_COLOR[color_list[2]] if color_list[2] != 'default' else PYGMENT_COLOR['magenta'], # numbers
String: PYGMENT_COLOR[color_list[3]] if color_list[3] != 'default' else PYGMENT_COLOR['green'] # strings
}
def set_mono(self) -> None:
"""

@@ -9,7 +9,7 @@ from .jc_types import ParserInfoType, JSONDictType
from jc import appdirs
__version__ = '1.23.2'
__version__ = '1.24.0'
parsers: List[str] = [
'acpi',
@@ -34,6 +34,7 @@ parsers: List[str] = [
'csv-s',
'date',
'datetime-iso',
'debconf-show',
'df',
'dig',
'dir',
@@ -43,6 +44,7 @@ parsers: List[str] = [
'email-address',
'env',
'file',
'find',
'findmnt',
'finger',
'free',
@@ -57,6 +59,7 @@ parsers: List[str] = [
'hashsum',
'hciconfig',
'history',
'host',
'hosts',
'id',
'ifconfig',
@@ -66,7 +69,7 @@ parsers: List[str] = [
'iostat-s',
'ip-address',
'iptables',
'iso-datetime',
'ip-route',
'iw-scan',
'iwconfig',
'jar-manifest',
@@ -76,6 +79,8 @@ parsers: List[str] = [
'last',
'ls',
'ls-s',
'lsattr',
'lsb-release',
'lsblk',
'lsmod',
'lsof',
@@ -88,9 +93,11 @@ parsers: List[str] = [
'mpstat-s',
'netstat',
'nmcli',
'nsd-control',
'ntpq',
'openvpn',
'os-prober',
'os-release',
'passwd',
'pci-ids',
'pgpass',
@@ -100,10 +107,13 @@ parsers: List[str] = [
'ping-s',
'pip-list',
'pip-show',
'pkg-index-apk',
'pkg-index-deb',
'plist',
'postconf',
'proc',
'proc-buddyinfo',
'proc-cmdline',
'proc-consoles',
'proc-cpuinfo',
'proc-crypto',
@@ -142,6 +152,7 @@ parsers: List[str] = [
'proc-net-packet',
'proc-net-protocols',
'proc-net-route',
'proc-net-tcp',
'proc-net-unix',
'proc-pid-fdinfo',
'proc-pid-io',
@@ -153,6 +164,7 @@ parsers: List[str] = [
'proc-pid-statm',
'proc-pid-status',
'ps',
'resolve-conf',
'route',
'rpm-qi',
'rsync',
@@ -160,11 +172,13 @@ parsers: List[str] = [
'semver',
'sfdisk',
'shadow',
'srt',
'ss',
'ssh-conf',
'sshd-conf',
'stat',
'stat-s',
'swapon',
'sysctl',
'syslog',
'syslog-s',
@@ -183,6 +197,7 @@ parsers: List[str] = [
'top-s',
'tracepath',
'traceroute',
'tune2fs',
'udevadm',
'ufw',
'ufw-appinfo',
@@ -193,12 +208,14 @@ parsers: List[str] = [
'uptime',
'url',
'ver',
'veracrypt',
'vmstat',
'vmstat-s',
'w',
'wc',
'who',
'x509-cert',
'x509-csr',
'xml',
'xrandr',
'yaml',

@@ -227,7 +227,7 @@ import jc.utils
class info():
"""Provides parser metadata (version, author, etc.)"""
version = '1.6'
version = '1.7'
description = '`acpi` command parser'
author = 'Kelly Brazil'
author_email = 'kellyjonbrazil@gmail.com'
@@ -337,19 +337,15 @@ def parse(data, raw=False, quiet=False):
output_line['state'] = 'Not charging'
output_line['charge_percent'] = line.split()[-1].rstrip('%,')
if 'Charging' in line \
or 'Discharging' in line \
or 'Full' in line:
if any(word in line for word in ('Charging', 'Discharging', 'Full')):
output_line['state'] = line.split()[2][:-1]
output_line['charge_percent'] = line.split()[3].rstrip('%,')
if 'will never fully discharge' in line:
if 'will never fully discharge' in line or 'rate information unavailable' in line:
pass
elif 'rate information unavailable' not in line:
if 'Charging' in line:
output_line['until_charged'] = line.split()[4]
if 'Discharging' in line:
output_line['charge_remaining'] = line.split()[4]
elif 'Charging' in line:
output_line['until_charged'] = line.split()[4]
elif 'Discharging' in line:
output_line['charge_remaining'] = line.split()[4]
if 'design capacity' in line:
output_line['design_capacity_mah'] = line.split()[4]
@@ -359,10 +355,7 @@ def parse(data, raw=False, quiet=False):
if obj_type == 'Adapter':
output_line['type'] = obj_type
output_line['id'] = obj_id
if 'on-line' in line:
output_line['on-line'] = True
else:
output_line['on-line'] = False
output_line['on-line'] = 'on-line' in line
if obj_type == 'Thermal':
output_line['type'] = obj_type

@@ -5,7 +5,7 @@ import socket
import struct
from ._errors import unwrap
from ._types import byte_cls, bytes_to_list, str_cls, type_name
from ._types import type_name
def inet_ntop(address_family, packed_ip):
@@ -33,7 +33,7 @@ def inet_ntop(address_family, packed_ip):
repr(address_family)
))
if not isinstance(packed_ip, byte_cls):
if not isinstance(packed_ip, bytes):
raise TypeError(unwrap(
'''
packed_ip must be a byte string, not %s
@@ -52,7 +52,7 @@ def inet_ntop(address_family, packed_ip):
))
if address_family == socket.AF_INET:
return '%d.%d.%d.%d' % tuple(bytes_to_list(packed_ip))
return '%d.%d.%d.%d' % tuple(list(packed_ip))
octets = struct.unpack(b'!HHHHHHHH', packed_ip)
@@ -106,7 +106,7 @@ def inet_pton(address_family, ip_string):
repr(address_family)
))
if not isinstance(ip_string, str_cls):
if not isinstance(ip_string, str):
raise TypeError(unwrap(
'''
ip_string must be a unicode string, not %s

@@ -13,25 +13,16 @@ from __future__ import unicode_literals, division, absolute_import, print_functi
from encodings import idna # noqa
import codecs
import re
import sys
from ._errors import unwrap
from ._types import byte_cls, str_cls, type_name, bytes_to_list, int_types
from ._types import type_name
if sys.version_info < (3,):
from urlparse import urlsplit, urlunsplit
from urllib import (
quote as urlquote,
unquote as unquote_to_bytes,
)
else:
from urllib.parse import (
quote as urlquote,
unquote_to_bytes,
urlsplit,
urlunsplit,
)
from urllib.parse import (
quote as urlquote,
unquote_to_bytes,
urlsplit,
urlunsplit,
)
def iri_to_uri(value, normalize=False):
@@ -48,7 +39,7 @@ def iri_to_uri(value, normalize=False):
A byte string of the ASCII-encoded URI
"""
if not isinstance(value, str_cls):
if not isinstance(value, str):
raise TypeError(unwrap(
'''
value must be a unicode string, not %s
@@ -57,19 +48,7 @@ def iri_to_uri(value, normalize=False):
))
scheme = None
# Python 2.6 doesn't split properly is the URL doesn't start with http:// or https://
if sys.version_info < (2, 7) and not value.startswith('http://') and not value.startswith('https://'):
real_prefix = None
prefix_match = re.match('^[^:]*://', value)
if prefix_match:
real_prefix = prefix_match.group(0)
value = 'http://' + value[len(real_prefix):]
parsed = urlsplit(value)
if real_prefix:
value = real_prefix + value[7:]
scheme = _urlquote(real_prefix[:-3])
else:
parsed = urlsplit(value)
parsed = urlsplit(value)
if scheme is None:
scheme = _urlquote(parsed.scheme)
@@ -81,7 +60,7 @@ def iri_to_uri(value, normalize=False):
password = _urlquote(parsed.password, safe='!$&\'()*+,;=')
port = parsed.port
if port is not None:
port = str_cls(port).encode('ascii')
port = str(port).encode('ascii')
netloc = b''
if username is not None:
@@ -112,7 +91,7 @@ def iri_to_uri(value, normalize=False):
path = ''
output = urlunsplit((scheme, netloc, path, query, fragment))
if isinstance(output, str_cls):
if isinstance(output, str):
output = output.encode('latin1')
return output
@@ -128,7 +107,7 @@ def uri_to_iri(value):
A unicode string of the IRI
"""
if not isinstance(value, byte_cls):
if not isinstance(value, bytes):
raise TypeError(unwrap(
'''
value must be a byte string, not %s
@@ -148,7 +127,7 @@ def uri_to_iri(value):
if hostname:
hostname = hostname.decode('idna')
port = parsed.port
if port and not isinstance(port, int_types):
if port and not isinstance(port, int):
port = port.decode('ascii')
netloc = ''
@@ -160,7 +139,7 @@ def uri_to_iri(value):
if hostname is not None:
netloc += hostname
if port is not None:
netloc += ':' + str_cls(port)
netloc += ':' + str(port)
path = _urlunquote(parsed.path, remap=['/'], preserve=True)
query = _urlunquote(parsed.query, remap=['&', '='], preserve=True)
@@ -182,7 +161,7 @@ def _iri_utf8_errors_handler(exc):
resume at)
"""
bytes_as_ints = bytes_to_list(exc.object[exc.start:exc.end])
bytes_as_ints = list(exc.object[exc.start:exc.end])
replacements = ['%%%02x' % num for num in bytes_as_ints]
return (''.join(replacements), exc.end)
@@ -230,7 +209,7 @@ def _urlquote(string, safe=''):
string = re.sub('%[0-9a-fA-F]{2}', _extract_escape, string)
output = urlquote(string.encode('utf-8'), safe=safe.encode('utf-8'))
if not isinstance(output, byte_cls):
if not isinstance(output, bytes):
output = output.encode('ascii')
# Restore the existing quoted values that we extracted

@@ -1,135 +0,0 @@
# Copyright (c) 2009 Raymond Hettinger
#
# Permission is hereby granted, free of charge, to any person
# obtaining a copy of this software and associated documentation files
# (the "Software"), to deal in the Software without restriction,
# including without limitation the rights to use, copy, modify, merge,
# publish, distribute, sublicense, and/or sell copies of the Software,
# and to permit persons to whom the Software is furnished to do so,
# subject to the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
# OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
# HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
# OTHER DEALINGS IN THE SOFTWARE.
import sys
if not sys.version_info < (2, 7):
from collections import OrderedDict
else:
from UserDict import DictMixin
class OrderedDict(dict, DictMixin):
def __init__(self, *args, **kwds):
if len(args) > 1:
raise TypeError('expected at most 1 arguments, got %d' % len(args))
try:
self.__end
except AttributeError:
self.clear()
self.update(*args, **kwds)
def clear(self):
self.__end = end = []
end += [None, end, end] # sentinel node for doubly linked list
self.__map = {} # key --> [key, prev, next]
dict.clear(self)
def __setitem__(self, key, value):
if key not in self:
end = self.__end
curr = end[1]
curr[2] = end[1] = self.__map[key] = [key, curr, end]
dict.__setitem__(self, key, value)
def __delitem__(self, key):
dict.__delitem__(self, key)
key, prev, next_ = self.__map.pop(key)
prev[2] = next_
next_[1] = prev
def __iter__(self):
end = self.__end
curr = end[2]
while curr is not end:
yield curr[0]
curr = curr[2]
def __reversed__(self):
end = self.__end
curr = end[1]
while curr is not end:
yield curr[0]
curr = curr[1]
def popitem(self, last=True):
if not self:
raise KeyError('dictionary is empty')
if last:
key = reversed(self).next()
else:
key = iter(self).next()
value = self.pop(key)
return key, value
def __reduce__(self):
items = [[k, self[k]] for k in self]
tmp = self.__map, self.__end
del self.__map, self.__end
inst_dict = vars(self).copy()
self.__map, self.__end = tmp
if inst_dict:
return (self.__class__, (items,), inst_dict)
return self.__class__, (items,)
def keys(self):
return list(self)
setdefault = DictMixin.setdefault
update = DictMixin.update
pop = DictMixin.pop
values = DictMixin.values
items = DictMixin.items
iterkeys = DictMixin.iterkeys
itervalues = DictMixin.itervalues
iteritems = DictMixin.iteritems
def __repr__(self):
if not self:
return '%s()' % (self.__class__.__name__,)
return '%s(%r)' % (self.__class__.__name__, self.items())
def copy(self):
return self.__class__(self)
@classmethod
def fromkeys(cls, iterable, value=None):
d = cls()
for key in iterable:
d[key] = value
return d
def __eq__(self, other):
if isinstance(other, OrderedDict):
if len(self) != len(other):
return False
for p, q in zip(self.items(), other.items()):
if p != q:
return False
return True
return dict.__eq__(self, other)
def __ne__(self, other):
return not self == other

@@ -2,28 +2,10 @@
from __future__ import unicode_literals, division, absolute_import, print_function
import inspect
import sys
if sys.version_info < (3,):
str_cls = unicode # noqa
byte_cls = str
int_types = (int, long) # noqa
def bytes_to_list(byte_string):
return [ord(b) for b in byte_string]
chr_cls = chr
else:
str_cls = str
byte_cls = bytes
int_types = int
bytes_to_list = list
def chr_cls(num):
return bytes([num])
def chr_cls(num):
return bytes([num])
def type_name(value):

@@ -48,8 +48,10 @@ Other type classes are defined that help compose the types listed above.
from __future__ import unicode_literals, division, absolute_import, print_function
from collections import OrderedDict
from datetime import datetime, timedelta
from fractions import Fraction
from io import BytesIO
import binascii
import copy
import math
@@ -58,22 +60,10 @@ import sys
from . import _teletex_codec
from ._errors import unwrap
from ._ordereddict import OrderedDict
from ._types import type_name, str_cls, byte_cls, int_types, chr_cls
from ._types import type_name, chr_cls
from .parser import _parse, _dump_header
from .util import int_to_bytes, int_from_bytes, timezone, extended_datetime, create_timezone, utc_with_dst
if sys.version_info <= (3,):
from cStringIO import StringIO as BytesIO
range = xrange # noqa
_PY2 = True
else:
from io import BytesIO
_PY2 = False
_teletex_codec.register()
@@ -220,7 +210,7 @@ class Asn1Value(object):
An instance of the current class
"""
if not isinstance(encoded_data, byte_cls):
if not isinstance(encoded_data, bytes):
raise TypeError('encoded_data must be a byte string, not %s' % type_name(encoded_data))
spec = None
@@ -291,7 +281,7 @@ class Asn1Value(object):
cls = self.__class__
# Allow explicit to be specified as a simple 2-element tuple
# instead of requiring the user make a nested tuple
if cls.explicit is not None and isinstance(cls.explicit[0], int_types):
if cls.explicit is not None and isinstance(cls.explicit[0], int):
cls.explicit = (cls.explicit, )
if hasattr(cls, '_setup'):
self._setup()
@@ -299,7 +289,7 @@ class Asn1Value(object):
# Normalize tagging values
if explicit is not None:
if isinstance(explicit, int_types):
if isinstance(explicit, int):
if class_ is None:
class_ = 'context'
explicit = (class_, explicit)
@@ -309,7 +299,7 @@ class Asn1Value(object):
tag = None
if implicit is not None:
if isinstance(implicit, int_types):
if isinstance(implicit, int):
if class_ is None:
class_ = 'context'
implicit = (class_, implicit)
@@ -336,11 +326,11 @@ class Asn1Value(object):
if explicit is not None:
# Ensure we have a tuple of 2-element tuples
if len(explicit) == 2 and isinstance(explicit[1], int_types):
if len(explicit) == 2 and isinstance(explicit[1], int):
explicit = (explicit, )
for class_, tag in explicit:
invalid_class = None
if isinstance(class_, int_types):
if isinstance(class_, int):
if class_ not in CLASS_NUM_TO_NAME_MAP:
invalid_class = class_
else:
@@ -356,7 +346,7 @@ class Asn1Value(object):
repr(invalid_class)
))
if tag is not None:
if not isinstance(tag, int_types):
if not isinstance(tag, int):
raise TypeError(unwrap(
'''
explicit tag must be an integer, not %s
@@ -379,7 +369,7 @@ class Asn1Value(object):
repr(class_)
))
if tag is not None:
if not isinstance(tag, int_types):
if not isinstance(tag, int):
raise TypeError(unwrap(
'''
implicit tag must be an integer, not %s
@@ -445,10 +435,7 @@ class Asn1Value(object):
A unicode string
"""
if _PY2:
return self.__bytes__()
else:
return self.__unicode__()
return self.__unicode__()
def __repr__(self):
"""
@@ -456,10 +443,7 @@ class Asn1Value(object):
A unicode string
"""
if _PY2:
return '<%s %s b%s>' % (type_name(self), id(self), repr(self.dump()))
else:
return '<%s %s %s>' % (type_name(self), id(self), repr(self.dump()))
return '<%s %s %s>' % (type_name(self), id(self), repr(self.dump()))
def __bytes__(self):
"""
@@ -609,10 +593,7 @@ class Asn1Value(object):
elif hasattr(self, 'chosen'):
self.chosen.debug(nest_level + 2)
else:
if _PY2 and isinstance(self.native, byte_cls):
print('%s Native: b%s' % (prefix, repr(self.native)))
else:
print('%s Native: %s' % (prefix, self.native))
print('%s Native: %s' % (prefix, self.native))
def dump(self, force=False):
"""
@@ -1058,7 +1039,7 @@ class Choice(Asn1Value):
An instance of the current class
"""
if not isinstance(encoded_data, byte_cls):
if not isinstance(encoded_data, bytes):
raise TypeError('encoded_data must be a byte string, not %s' % type_name(encoded_data))
value, _ = _parse_build(encoded_data, spec=cls, spec_params=kwargs, strict=strict)
@@ -1425,17 +1406,11 @@ class Concat(object):
def __str__(self):
"""
Since str is different in Python 2 and 3, this calls the appropriate
method, __unicode__() or __bytes__()
:return:
A unicode string
"""
if _PY2:
return self.__bytes__()
else:
return self.__unicode__()
return self.__unicode__()
def __bytes__(self):
"""
@@ -1684,7 +1659,7 @@ class Primitive(Asn1Value):
A byte string
"""
if not isinstance(value, byte_cls):
if not isinstance(value, bytes):
raise TypeError(unwrap(
'''
%s value must be a byte string, not %s
@@ -1784,7 +1759,7 @@ class AbstractString(Constructable, Primitive):
A unicode string
"""
if not isinstance(value, str_cls):
if not isinstance(value, str):
raise TypeError(unwrap(
'''
%s value must be a unicode string, not %s
@@ -1915,7 +1890,7 @@ class Integer(Primitive, ValueMap):
ValueError - when an invalid value is passed
"""
if isinstance(value, str_cls):
if isinstance(value, str):
if self._map is None:
raise ValueError(unwrap(
'''
@@ -1935,7 +1910,7 @@ class Integer(Primitive, ValueMap):
value = self._reverse_map[value]
elif not isinstance(value, int_types):
elif not isinstance(value, int):
raise TypeError(unwrap(
'''
%s value must be an integer or unicode string when a name_map
@@ -2004,7 +1979,7 @@ class _IntegerBitString(object):
# return an empty chunk, for cases like \x23\x80\x00\x00
return []
unused_bits_len = ord(self.contents[0]) if _PY2 else self.contents[0]
unused_bits_len = self.contents[0]
value = int_from_bytes(self.contents[1:])
bits = (len(self.contents) - 1) * 8
@@ -2135,7 +2110,7 @@ class BitString(_IntegerBitString, Constructable, Castable, Primitive, ValueMap)
if key in value:
bits[index] = 1
value = ''.join(map(str_cls, bits))
value = ''.join(map(str, bits))
elif value.__class__ == tuple:
if self._map is None:
@@ -2146,7 +2121,7 @@ class BitString(_IntegerBitString, Constructable, Castable, Primitive, ValueMap)
if bit:
name = self._map.get(index, index)
self._native.add(name)
value = ''.join(map(str_cls, value))
value = ''.join(map(str, value))
else:
raise TypeError(unwrap(
@@ -2220,7 +2195,7 @@ class BitString(_IntegerBitString, Constructable, Castable, Primitive, ValueMap)
A boolean if the bit is set
"""
is_int = isinstance(key, int_types)
is_int = isinstance(key, int)
if not is_int:
if not isinstance(self._map, dict):
raise ValueError(unwrap(
@@ -2266,7 +2241,7 @@ class BitString(_IntegerBitString, Constructable, Castable, Primitive, ValueMap)
ValueError - when _map is not set or the key name is invalid
"""
is_int = isinstance(key, int_types)
is_int = isinstance(key, int)
if not is_int:
if self._map is None:
raise ValueError(unwrap(
@@ -2333,8 +2308,8 @@ class BitString(_IntegerBitString, Constructable, Castable, Primitive, ValueMap)
if self._map:
self._native = set()
for index, bit in enumerate(bits):
if bit:
name = self._map.get(index, index)
if bit and index in self._map:
name = self._map.get(index)
self._native.add(name)
else:
self._native = bits
@@ -2365,7 +2340,7 @@ class OctetBitString(Constructable, Castable, Primitive):
ValueError - when an invalid value is passed
"""
if not isinstance(value, byte_cls):
if not isinstance(value, bytes):
raise TypeError(unwrap(
'''
%s value must be a byte string, not %s
@@ -2435,7 +2410,7 @@ class OctetBitString(Constructable, Castable, Primitive):
List with one tuple, consisting of a byte string and an integer (unused bits)
"""
unused_bits_len = ord(self.contents[0]) if _PY2 else self.contents[0]
unused_bits_len = self.contents[0]
if not unused_bits_len:
return [(self.contents[1:], ())]
@@ -2448,11 +2423,11 @@ class OctetBitString(Constructable, Castable, Primitive):
raise ValueError('Bit string has {0} unused bits'.format(unused_bits_len))
mask = (1 << unused_bits_len) - 1
last_byte = ord(self.contents[-1]) if _PY2 else self.contents[-1]
last_byte = self.contents[-1]
# zero out the unused bits in the last byte.
zeroed_byte = last_byte & ~mask
value = self.contents[1:-1] + (chr(zeroed_byte) if _PY2 else bytes((zeroed_byte,)))
value = self.contents[1:-1] + bytes((zeroed_byte,))
unused_bits = _int_to_bit_tuple(last_byte & mask, unused_bits_len)
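The `OctetBitString` hunk above zeroes the padding bits of a DER BIT STRING: the first content octet gives the count of unused bits, which must be masked out of the last byte. A standalone sketch of that logic (the function name is mine, not asn1crypto's):

```python
def zero_unused_bits(contents: bytes) -> bytes:
    # First content octet of a DER BIT STRING is the unused-bits count.
    unused_bits_len = contents[0]
    if not unused_bits_len:
        return contents[1:]
    if len(contents) < 2:
        raise ValueError('Bit string insufficient data')
    if unused_bits_len > 7:
        raise ValueError('Bit string has %d unused bits' % unused_bits_len)
    # Zero out the unused (low) bits in the last byte.
    mask = (1 << unused_bits_len) - 1
    last_byte = contents[-1]
    return contents[1:-1] + bytes((last_byte & ~mask,))
```
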
@@ -2505,7 +2480,7 @@ class IntegerBitString(_IntegerBitString, Constructable, Castable, Primitive):
ValueError - when an invalid value is passed
"""
if not isinstance(value, int_types):
if not isinstance(value, int):
raise TypeError(unwrap(
'''
%s value must be a positive integer, not %s
@@ -2570,7 +2545,7 @@ class OctetString(Constructable, Castable, Primitive):
A byte string
"""
if not isinstance(value, byte_cls):
if not isinstance(value, bytes):
raise TypeError(unwrap(
'''
%s value must be a byte string, not %s
@@ -2654,7 +2629,7 @@ class IntegerOctetString(Constructable, Castable, Primitive):
ValueError - when an invalid value is passed
"""
if not isinstance(value, int_types):
if not isinstance(value, int):
raise TypeError(unwrap(
'''
%s value must be a positive integer, not %s
@@ -2752,7 +2727,7 @@ class ParsableOctetString(Constructable, Castable, Primitive):
A byte string
"""
if not isinstance(value, byte_cls):
if not isinstance(value, bytes):
raise TypeError(unwrap(
'''
%s value must be a byte string, not %s
@@ -2904,7 +2879,7 @@ class ParsableOctetBitString(ParsableOctetString):
ValueError - when an invalid value is passed
"""
if not isinstance(value, byte_cls):
if not isinstance(value, bytes):
raise TypeError(unwrap(
'''
%s value must be a byte string, not %s
@@ -2934,7 +2909,7 @@ class ParsableOctetBitString(ParsableOctetString):
A byte string
"""
unused_bits_len = ord(self.contents[0]) if _PY2 else self.contents[0]
unused_bits_len = self.contents[0]
if unused_bits_len:
raise ValueError('ParsableOctetBitString should have no unused bits')
@@ -3007,7 +2982,7 @@ class ObjectIdentifier(Primitive, ValueMap):
type_name(cls)
))
if not isinstance(value, str_cls):
if not isinstance(value, str):
raise TypeError(unwrap(
'''
value must be a unicode string, not %s
@@ -3045,7 +3020,7 @@ class ObjectIdentifier(Primitive, ValueMap):
type_name(cls)
))
if not isinstance(value, str_cls):
if not isinstance(value, str):
raise TypeError(unwrap(
'''
value must be a unicode string, not %s
@@ -3079,7 +3054,7 @@ class ObjectIdentifier(Primitive, ValueMap):
ValueError - when an invalid value is passed
"""
if not isinstance(value, str_cls):
if not isinstance(value, str):
raise TypeError(unwrap(
'''
%s value must be a unicode string, not %s
@@ -3153,24 +3128,22 @@ class ObjectIdentifier(Primitive, ValueMap):
part = 0
for byte in self.contents:
if _PY2:
byte = ord(byte)
part = part * 128
part += byte & 127
# Last byte in subidentifier has the eighth bit set to 0
if byte & 0x80 == 0:
if len(output) == 0:
if part >= 80:
output.append(str_cls(2))
output.append(str_cls(part - 80))
output.append(str(2))
output.append(str(part - 80))
elif part >= 40:
output.append(str_cls(1))
output.append(str_cls(part - 40))
output.append(str(1))
output.append(str(part - 40))
else:
output.append(str_cls(0))
output.append(str_cls(part))
output.append(str(0))
output.append(str(part))
else:
output.append(str_cls(part))
output.append(str(part))
part = 0
self._dotted = '.'.join(output)
@@ -3240,7 +3213,7 @@ class Enumerated(Integer):
ValueError - when an invalid value is passed
"""
if not isinstance(value, int_types) and not isinstance(value, str_cls):
if not isinstance(value, int) and not isinstance(value, str):
raise TypeError(unwrap(
'''
%s value must be an integer or a unicode string, not %s
@@ -3249,7 +3222,7 @@ class Enumerated(Integer):
type_name(value)
))
if isinstance(value, str_cls):
if isinstance(value, str):
if value not in self._reverse_map:
raise ValueError(unwrap(
'''
@@ -3507,7 +3480,7 @@ class Sequence(Asn1Value):
if self.children is None:
self._parse_children()
if not isinstance(key, int_types):
if not isinstance(key, int):
if key not in self._field_map:
raise KeyError(unwrap(
'''
@@ -3554,7 +3527,7 @@ class Sequence(Asn1Value):
if self.children is None:
self._parse_children()
if not isinstance(key, int_types):
if not isinstance(key, int):
if key not in self._field_map:
raise KeyError(unwrap(
'''
@@ -3605,7 +3578,7 @@ class Sequence(Asn1Value):
if self.children is None:
self._parse_children()
if not isinstance(key, int_types):
if not isinstance(key, int):
if key not in self._field_map:
raise KeyError(unwrap(
'''
@@ -4003,7 +3976,7 @@ class Sequence(Asn1Value):
encoded using
"""
if not isinstance(field_name, str_cls):
if not isinstance(field_name, str):
raise TypeError(unwrap(
'''
field_name must be a unicode string, not %s
@@ -4051,7 +4024,7 @@ class Sequence(Asn1Value):
try:
name = self._fields[index][0]
except (IndexError):
name = str_cls(index)
name = str(index)
self._native[name] = child.native
except (ValueError, TypeError) as e:
self._native = None
@@ -4879,7 +4852,7 @@ class AbstractTime(AbstractString):
A dict with the parsed values
"""
string = str_cls(self)
string = str(self)
m = self._TIMESTRING_RE.match(string)
if not m:
@@ -5018,8 +4991,6 @@ class UTCTime(AbstractTime):
raise ValueError('Year of the UTCTime is not in range [1950, 2049], use GeneralizedTime instead')
value = value.strftime('%y%m%d%H%M%SZ')
if _PY2:
value = value.decode('ascii')
AbstractString.set(self, value)
# Set it to None and let the class take care of converting the next
@@ -5117,8 +5088,6 @@ class GeneralizedTime(AbstractTime):
fraction = ''
value = value.strftime('%Y%m%d%H%M%S') + fraction + 'Z'
if _PY2:
value = value.decode('ascii')
AbstractString.set(self, value)
# Set it to None and let the class take care of converting the next
@@ -5340,7 +5309,7 @@ def _build_id_tuple(params, spec):
else:
required_class = 2
required_tag = params['implicit']
if required_class is not None and not isinstance(required_class, int_types):
if required_class is not None and not isinstance(required_class, int):
required_class = CLASS_NAME_TO_NUM_MAP[required_class]
required_class = params.get('class_', required_class)

View File

@@ -0,0 +1 @@
quiet = False

View File

@@ -20,7 +20,7 @@ import hashlib
import math
from ._errors import unwrap, APIException
from ._types import type_name, byte_cls
from ._types import type_name
from .algos import _ForceNullParameters, DigestAlgorithm, EncryptionAlgorithm, RSAESOAEPParams, RSASSAPSSParams
from .core import (
Any,
@@ -582,7 +582,7 @@ class ECPrivateKey(Sequence):
if self._key_size is None:
# Infer the key_size from the existing private key if possible
pkey_contents = self['private_key'].contents
if isinstance(pkey_contents, byte_cls) and len(pkey_contents) > 1:
if isinstance(pkey_contents, bytes) and len(pkey_contents) > 1:
self.set_key_size(len(self['private_key'].contents))
elif self._key_size is not None:
@@ -744,7 +744,7 @@ class PrivateKeyInfo(Sequence):
A PrivateKeyInfo object
"""
if not isinstance(private_key, byte_cls) and not isinstance(private_key, Asn1Value):
if not isinstance(private_key, bytes) and not isinstance(private_key, Asn1Value):
raise TypeError(unwrap(
'''
private_key must be a byte string or Asn1Value, not %s
@@ -1112,7 +1112,7 @@ class PublicKeyInfo(Sequence):
A PublicKeyInfo object
"""
if not isinstance(public_key, byte_cls) and not isinstance(public_key, Asn1Value):
if not isinstance(public_key, bytes) and not isinstance(public_key, Asn1Value):
raise TypeError(unwrap(
'''
public_key must be a byte string or Asn1Value, not %s
@@ -1268,7 +1268,7 @@ class PublicKeyInfo(Sequence):
"""
if self._sha1 is None:
self._sha1 = hashlib.sha1(byte_cls(self['public_key'])).digest()
self._sha1 = hashlib.sha1(bytes(self['public_key'])).digest()
return self._sha1
@property
@@ -1279,7 +1279,7 @@ class PublicKeyInfo(Sequence):
"""
if self._sha256 is None:
self._sha256 = hashlib.sha256(byte_cls(self['public_key'])).digest()
self._sha256 = hashlib.sha256(bytes(self['public_key'])).digest()
return self._sha256
@property

View File

@@ -15,10 +15,9 @@ from __future__ import unicode_literals, division, absolute_import, print_functi
import sys
from ._types import byte_cls, chr_cls, type_name
from ._types import chr_cls, type_name
from .util import int_from_bytes, int_to_bytes
_PY2 = sys.version_info <= (3,)
_INSUFFICIENT_DATA_MESSAGE = 'Insufficient data - %s bytes requested but only %s available'
_MAX_DEPTH = 10
@@ -66,7 +65,7 @@ def emit(class_, method, tag, contents):
if tag < 0:
raise ValueError('tag must be greater than zero, not %s' % tag)
if not isinstance(contents, byte_cls):
if not isinstance(contents, bytes):
raise TypeError('contents must be a byte string, not %s' % type_name(contents))
return _dump_header(class_, method, tag, contents) + contents
@@ -101,7 +100,7 @@ def parse(contents, strict=False):
- 5: byte string trailer
"""
if not isinstance(contents, byte_cls):
if not isinstance(contents, bytes):
raise TypeError('contents must be a byte string, not %s' % type_name(contents))
contents_len = len(contents)
@@ -130,7 +129,7 @@ def peek(contents):
An integer with the number of bytes occupied by the ASN.1 value
"""
if not isinstance(contents, byte_cls):
if not isinstance(contents, bytes):
raise TypeError('contents must be a byte string, not %s' % type_name(contents))
info, consumed = _parse(contents, len(contents))
@@ -171,7 +170,7 @@ def _parse(encoded_data, data_len, pointer=0, lengths_only=False, depth=0):
if data_len < pointer + 1:
raise ValueError(_INSUFFICIENT_DATA_MESSAGE % (1, data_len - pointer))
first_octet = ord(encoded_data[pointer]) if _PY2 else encoded_data[pointer]
first_octet = encoded_data[pointer]
pointer += 1
@@ -183,7 +182,7 @@ def _parse(encoded_data, data_len, pointer=0, lengths_only=False, depth=0):
while True:
if data_len < pointer + 1:
raise ValueError(_INSUFFICIENT_DATA_MESSAGE % (1, data_len - pointer))
num = ord(encoded_data[pointer]) if _PY2 else encoded_data[pointer]
num = encoded_data[pointer]
pointer += 1
if num == 0x80 and tag == 0:
raise ValueError('Non-minimal tag encoding')
@@ -196,7 +195,7 @@ def _parse(encoded_data, data_len, pointer=0, lengths_only=False, depth=0):
if data_len < pointer + 1:
raise ValueError(_INSUFFICIENT_DATA_MESSAGE % (1, data_len - pointer))
length_octet = ord(encoded_data[pointer]) if _PY2 else encoded_data[pointer]
length_octet = encoded_data[pointer]
pointer += 1
trailer = b''

View File

@@ -11,17 +11,13 @@ Encoding DER to PEM and decoding PEM to DER. Exports the following items:
from __future__ import unicode_literals, division, absolute_import, print_function
from io import BytesIO
import base64
import re
import sys
from ._errors import unwrap
from ._types import type_name as _type_name, str_cls, byte_cls
from ._types import type_name as _type_name
if sys.version_info < (3,):
from cStringIO import StringIO as BytesIO
else:
from io import BytesIO
def detect(byte_string):
@@ -36,7 +32,7 @@ def detect(byte_string):
string
"""
if not isinstance(byte_string, byte_cls):
if not isinstance(byte_string, bytes):
raise TypeError(unwrap(
'''
byte_string must be a byte string, not %s
@@ -67,14 +63,14 @@ def armor(type_name, der_bytes, headers=None):
A byte string of the PEM block
"""
if not isinstance(der_bytes, byte_cls):
if not isinstance(der_bytes, bytes):
raise TypeError(unwrap(
'''
der_bytes must be a byte string, not %s
''' % _type_name(der_bytes)
))
if not isinstance(type_name, str_cls):
if not isinstance(type_name, str):
raise TypeError(unwrap(
'''
type_name must be a unicode string, not %s
@@ -127,7 +123,7 @@ def _unarmor(pem_bytes):
in the form "Name: Value" that are right after the begin line.
"""
if not isinstance(pem_bytes, byte_cls):
if not isinstance(pem_bytes, bytes):
raise TypeError(unwrap(
'''
pem_bytes must be a byte string, not %s

View File

@@ -20,11 +20,11 @@ from __future__ import unicode_literals, division, absolute_import, print_functi
import math
import sys
from datetime import datetime, date, timedelta, tzinfo
from collections import OrderedDict
from datetime import datetime, date, timedelta, timezone, tzinfo
from ._errors import unwrap
from ._iri import iri_to_uri, uri_to_iri # noqa
from ._ordereddict import OrderedDict # noqa
from ._types import type_name
if sys.platform == 'win32':
@@ -33,230 +33,53 @@ else:
from socket import inet_ntop, inet_pton # noqa
# Python 2
if sys.version_info <= (3,):
    def int_to_bytes(value, signed=False, width=None):
        """
        Converts an integer to a byte string

        :param value:
            The integer to convert

        :param signed:
            If the byte string should be encoded using two's complement

        :param width:
            If None, the minimal possible size (but at least 1),
            otherwise an integer of the byte width for the return value

        :return:
            A byte string
        """

        if value == 0 and width == 0:
            return b''

        # Handle negatives in two's complement
        is_neg = False
        if signed and value < 0:
            is_neg = True
            bits = int(math.ceil(len('%x' % abs(value)) / 2.0) * 8)
            value = (value + (1 << bits)) % (1 << bits)

        hex_str = '%x' % value
        if len(hex_str) & 1:
            hex_str = '0' + hex_str

        output = hex_str.decode('hex')

        if signed and not is_neg and ord(output[0:1]) & 0x80:
            output = b'\x00' + output

        if width is not None:
            if len(output) > width:
                raise OverflowError('int too big to convert')
            if is_neg:
                pad_char = b'\xFF'
            else:
                pad_char = b'\x00'
            output = (pad_char * (width - len(output))) + output
        elif is_neg and ord(output[0:1]) & 0x80 == 0:
            output = b'\xFF' + output

        return output

    def int_from_bytes(value, signed=False):
        """
        Converts a byte string to an integer

        :param value:
            The byte string to convert

        :param signed:
            If the byte string should be interpreted using two's complement

        :return:
            An integer
        """

        if value == b'':
            return 0

        num = long(value.encode("hex"), 16)  # noqa

        if not signed:
            return num

        # Check for sign bit and handle two's complement
        if ord(value[0:1]) & 0x80:
            bit_len = len(value) * 8
            return num - (1 << bit_len)

        return num
    class timezone(tzinfo):  # noqa
        """
        Implements datetime.timezone for py2.
        Only full minute offsets are supported.
        DST is not supported.
        """

        def __init__(self, offset, name=None):
            """
            :param offset:
                A timedelta with this timezone's offset from UTC

            :param name:
                Name of the timezone; if None, generate one.
            """

            if not timedelta(hours=-24) < offset < timedelta(hours=24):
                raise ValueError('Offset must be in [-23:59, 23:59]')
            if offset.seconds % 60 or offset.microseconds:
                raise ValueError('Offset must be full minutes')
            self._offset = offset

            if name is not None:
                self._name = name
            elif not offset:
                self._name = 'UTC'
            else:
                self._name = 'UTC' + _format_offset(offset)

        def __eq__(self, other):
            """
            Compare two timezones

            :param other:
                The other timezone to compare to

            :return:
                A boolean
            """

            if type(other) != timezone:
                return False
            return self._offset == other._offset

        def __getinitargs__(self):
            """
            Called by tzinfo.__reduce__ to support pickle and copy.

            :return:
                offset and name, to be used for __init__
            """

            return self._offset, self._name

        def tzname(self, dt):
            """
            :param dt:
                A datetime object; ignored.

            :return:
                Name of this timezone
            """

            return self._name

        def utcoffset(self, dt):
            """
            :param dt:
                A datetime object; ignored.

            :return:
                A timedelta object with the offset from UTC
            """

            return self._offset

        def dst(self, dt):
            """
            :param dt:
                A datetime object; ignored.

            :return:
                Zero timedelta
            """

            return timedelta(0)

    timezone.utc = timezone(timedelta(0))
# Python 3
else:
from datetime import timezone # noqa
def int_to_bytes(value, signed=False, width=None):
    """
    Converts an integer to a byte string

    :param value:
        The integer to convert

    :param signed:
        If the byte string should be encoded using two's complement

    :param width:
        If None, the minimal possible size (but at least 1),
        otherwise an integer of the byte width for the return value

    :return:
        A byte string
    """

    if width is None:
        if signed:
            if value < 0:
                bits_required = abs(value + 1).bit_length()
            else:
                bits_required = value.bit_length()
            if bits_required % 8 == 0:
                bits_required += 1
        else:
            bits_required = value.bit_length()
        width = math.ceil(bits_required / 8) or 1
    return value.to_bytes(width, byteorder='big', signed=signed)

def int_from_bytes(value, signed=False):
    """
    Converts a byte string to an integer

    :param value:
        The byte string to convert

    :param signed:
        If the byte string should be interpreted using two's complement

    :return:
        An integer
    """

    return int.from_bytes(value, 'big', signed=signed)
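The width computation above reserves an extra bit when a signed value's magnitude exactly fills a whole number of bits, so two's complement has room for the sign. A small sketch isolating just that calculation (the helper name is mine):

```python
import math

def minimal_signed_width(value: int) -> int:
    # Two's complement needs one extra bit for the sign when the
    # magnitude's bit length is a multiple of 8.
    if value < 0:
        bits_required = abs(value + 1).bit_length()
    else:
        bits_required = value.bit_length()
    if bits_required % 8 == 0:
        bits_required += 1
    return math.ceil(bits_required / 8) or 1
```

For example, 127 fits in one signed byte, but 128 needs two; -128 still fits in one.
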
def _format_offset(off):

View File

@@ -15,6 +15,7 @@ Other type classes are defined that help compose the types listed above.
from __future__ import unicode_literals, division, absolute_import, print_function
from collections import OrderedDict
from contextlib import contextmanager
from encodings import idna # noqa
import hashlib
@@ -26,8 +27,7 @@ import unicodedata
from ._errors import unwrap
from ._iri import iri_to_uri, uri_to_iri
from ._ordereddict import OrderedDict
from ._types import type_name, str_cls, bytes_to_list
from ._types import type_name
from .algos import AlgorithmIdentifier, AnyAlgorithmIdentifier, DigestAlgorithm, SignedDigestAlgorithm
from .core import (
Any,
@@ -100,7 +100,7 @@ class DNSName(IA5String):
A unicode string
"""
if not isinstance(value, str_cls):
if not isinstance(value, str):
raise TypeError(unwrap(
'''
%s value must be a unicode string, not %s
@@ -131,7 +131,7 @@ class URI(IA5String):
A unicode string
"""
if not isinstance(value, str_cls):
if not isinstance(value, str):
raise TypeError(unwrap(
'''
%s value must be a unicode string, not %s
@@ -215,7 +215,7 @@ class EmailAddress(IA5String):
A unicode string
"""
if not isinstance(value, str_cls):
if not isinstance(value, str):
raise TypeError(unwrap(
'''
%s value must be a unicode string, not %s
@@ -251,7 +251,18 @@ class EmailAddress(IA5String):
self._unicode = contents.decode('cp1252')
else:
mailbox, hostname = contents.rsplit(b'@', 1)
self._unicode = mailbox.decode('cp1252') + '@' + hostname.decode('idna')
# fix to allow incorrectly encoded email addresses to succeed with warning
try:
self._unicode = mailbox.decode('cp1252') + '@' + hostname.decode('idna')
except UnicodeDecodeError:
ascii_mailbox = mailbox.decode('ascii', errors='backslashreplace')
ascii_hostname = hostname.decode('ascii', errors='backslashreplace')
from jc.utils import warning_message
import jc.parsers.asn1crypto.jc_global as jc_global
if not jc_global.quiet:
warning_message([f'Invalid email address found: {ascii_mailbox}@{ascii_hostname}'])
self._unicode = ascii_mailbox + '@' + ascii_hostname
return self._unicode
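The jc patch above lets malformed email addresses decode with a warning instead of raising. A minimal sketch of just the decode fallback, without the jc warning plumbing (the function name is mine):

```python
def decode_email(mailbox: bytes, hostname: bytes) -> str:
    # Try the normal path: cp1252 mailbox plus IDNA hostname; on invalid
    # input, degrade to backslash-escaped ASCII rather than raising.
    try:
        return mailbox.decode('cp1252') + '@' + hostname.decode('idna')
    except UnicodeDecodeError:
        return (mailbox.decode('ascii', errors='backslashreplace')
                + '@' + hostname.decode('ascii', errors='backslashreplace'))
```

Byte 0x81 is undefined in cp1252, so it triggers the fallback path.
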
def __ne__(self, other):
@@ -312,7 +323,7 @@ class IPAddress(OctetString):
an IPv6 address or IPv6 address with CIDR
"""
if not isinstance(value, str_cls):
if not isinstance(value, str):
raise TypeError(unwrap(
'''
%s value must be a unicode string, not %s
@@ -402,7 +413,7 @@ class IPAddress(OctetString):
if cidr_int is not None:
cidr_bits = '{0:b}'.format(cidr_int)
cidr = len(cidr_bits.rstrip('0'))
value = value + '/' + str_cls(cidr)
value = value + '/' + str(cidr)
self._native = value
return self._native
@@ -2587,7 +2598,7 @@ class Certificate(Sequence):
"""
if self._issuer_serial is None:
self._issuer_serial = self.issuer.sha256 + b':' + str_cls(self.serial_number).encode('ascii')
self._issuer_serial = self.issuer.sha256 + b':' + str(self.serial_number).encode('ascii')
return self._issuer_serial
@property
@@ -2636,7 +2647,7 @@ class Certificate(Sequence):
# We untag the element since it is tagged via being a choice from GeneralName
issuer = issuer.untag()
authority_serial = self.authority_key_identifier_value['authority_cert_serial_number'].native
self._authority_issuer_serial = issuer.sha256 + b':' + str_cls(authority_serial).encode('ascii')
self._authority_issuer_serial = issuer.sha256 + b':' + str(authority_serial).encode('ascii')
else:
self._authority_issuer_serial = None
return self._authority_issuer_serial
@@ -2849,7 +2860,7 @@ class Certificate(Sequence):
with a space between each pair of characters, all uppercase
"""
return ' '.join('%02X' % c for c in bytes_to_list(self.sha1))
return ' '.join('%02X' % c for c in list(self.sha1))
@property
def sha256(self):
@@ -2871,7 +2882,7 @@ class Certificate(Sequence):
with a space between each pair of characters, all uppercase
"""
return ' '.join('%02X' % c for c in bytes_to_list(self.sha256))
return ' '.join('%02X' % c for c in list(self.sha256))
def is_valid_domain_ip(self, domain_ip):
"""
@@ -2885,7 +2896,7 @@ class Certificate(Sequence):
A boolean - if the domain or IP is valid for the certificate
"""
if not isinstance(domain_ip, str_cls):
if not isinstance(domain_ip, str):
raise TypeError(unwrap(
'''
domain_ip must be a unicode string, not %s

View File

@@ -31,6 +31,7 @@ a controller and a device but there might be fields corresponding to one entity.
"name": string,
"is_default": boolean,
"is_public": boolean,
"is_random": boolean,
"address": string,
"alias": string,
"class": string,
@@ -49,8 +50,10 @@ a controller and a device but there might be fields corresponding to one entity.
{
"name": string,
"is_public": boolean,
"is_random": boolean,
"address": string,
"alias": string,
"appearance": string,
"class": string,
"icon": string,
"paired": string,
@@ -61,7 +64,8 @@ a controller and a device but there might be fields corresponding to one entity.
"legacy_pairing": string,
"rssi": int,
"txpower": int,
"uuids": array
"uuids": array,
"modalias": string
}
]
@@ -104,12 +108,12 @@ import jc.utils
class info():
"""Provides parser metadata (version, author, etc.)"""
version = '1.0'
version = '1.1'
description = '`bluetoothctl` command parser'
author = 'Jake Ob'
author_email = 'iakopap at gmail.com'
compatible = ["linux"]
magic_commands = ["bluetoothctl"]
compatible = ['linux']
magic_commands = ['bluetoothctl']
tags = ['command']
@@ -124,6 +128,7 @@ try:
"name": str,
"is_default": bool,
"is_public": bool,
"is_random": bool,
"address": str,
"alias": str,
"class": str,
@@ -141,8 +146,10 @@ try:
{
"name": str,
"is_public": bool,
"is_random": bool,
"address": str,
"alias": str,
"appearance": str,
"class": str,
"icon": str,
"paired": str,
@@ -154,6 +161,7 @@ try:
"rssi": int,
"txpower": int,
"uuids": List[str],
"modalias": str
},
)
except ImportError:
@@ -195,6 +203,7 @@ def _parse_controller(next_lines: List[str]) -> Optional[Controller]:
"name": '',
"is_default": False,
"is_public": False,
"is_random": False,
"address": matches["address"],
"alias": '',
"class": '',
@@ -210,10 +219,12 @@ def _parse_controller(next_lines: List[str]) -> Optional[Controller]:
if name.endswith("[default]"):
controller["is_default"] = True
name = name.replace("[default]", "")
if name.endswith("(public)"):
elif name.endswith("(public)"):
controller["is_public"] = True
name = name.replace("(public)", "")
elif name.endswith("(random)"):
controller["is_random"] = True
name = name.replace("(random)", "")
controller["name"] = name.strip()
@@ -257,6 +268,7 @@ _device_head_pattern = r"Device (?P<address>([0-9A-F]{2}:){5}[0-9A-F]{2}) (?P<na
_device_line_pattern = (
r"(\s*Name:\s*(?P<name>.+)"
+ r"|\s*Alias:\s*(?P<alias>.+)"
+ r"|\s*Appearance:\s*(?P<appearance>.+)"
+ r"|\s*Class:\s*(?P<class>.+)"
+ r"|\s*Icon:\s*(?P<icon>.+)"
+ r"|\s*Paired:\s*(?P<paired>.+)"
@@ -290,8 +302,10 @@ def _parse_device(next_lines: List[str], quiet: bool) -> Optional[Device]:
device: Device = {
"name": '',
"is_public": False,
"is_random": False,
"address": matches["address"],
"alias": '',
"appearance": '',
"class": '',
"icon": '',
"paired": '',
@@ -303,11 +317,15 @@ def _parse_device(next_lines: List[str], quiet: bool) -> Optional[Device]:
"rssi": 0,
"txpower": 0,
"uuids": [],
"modalias": ''
}
if name.endswith("(public)"):
device["is_public"] = True
name = name.replace("(public)", "")
elif name.endswith("(random)"):
device["is_random"] = True
name = name.replace("(random)", "")
device["name"] = name.strip()
@@ -325,6 +343,8 @@ def _parse_device(next_lines: List[str], quiet: bool) -> Optional[Device]:
device["name"] = matches["name"]
elif matches["alias"]:
device["alias"] = matches["alias"]
elif matches["appearance"]:
device["appearance"] = matches["appearance"]
elif matches["class"]:
device["class"] = matches["class"]
elif matches["icon"]:
@@ -359,6 +379,8 @@ def _parse_device(next_lines: List[str], quiet: bool) -> Optional[Device]:
if not "uuids" in device:
device["uuids"] = []
device["uuids"].append(matches["uuid"])
elif matches["modalias"]:
device["modalias"] = matches["modalias"]
return device
@@ -376,12 +398,11 @@ def parse(data: str, raw: bool = False, quiet: bool = False) -> List[JSONDictTyp
List of Dictionaries. Raw or processed structured data.
"""
jc.utils.compatibility(__name__, info.compatible, quiet)
jc.utils.input_type_check(data)
result: List = []
if jc.utils.has_data(data):
jc.utils.compatibility(__name__, info.compatible, quiet)
jc.utils.input_type_check(data)
linedata = data.splitlines()
linedata.reverse()
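The `(public)`/`(random)` suffix handling added in the diff above can be sketched as follows; the sample name is hypothetical:

```python
# Minimal sketch of the controller-name suffix handling in the diff above.
name = "hci0 (random)"
controller = {"is_default": False, "is_public": False, "is_random": False}
if name.endswith("[default]"):
    controller["is_default"] = True
    name = name.replace("[default]", "")
elif name.endswith("(public)"):
    controller["is_public"] = True
    name = name.replace("(public)", "")
elif name.endswith("(random)"):
    controller["is_random"] = True
    name = name.replace("(random)", "")
controller["name"] = name.strip()
print(controller)
```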

View File

@@ -130,6 +130,7 @@ Examples:
}
}
"""
import re
from typing import List, Dict
from jc.jc_types import JSONDictType
import jc.utils
@@ -137,7 +138,7 @@ import jc.utils
class info():
"""Provides parser metadata (version, author, etc.)"""
version = '1.0'
version = '1.2'
description = '`certbot` command parser'
author = 'Kelly Brazil'
author_email = 'kellyjonbrazil@gmail.com'
@@ -200,7 +201,9 @@ def parse(
if jc.utils.has_data(data):
if 'Found the following certs:\n' in data:
cert_pattern = re.compile(r'^Found the following certs:\r?$', re.MULTILINE)
if re.search(cert_pattern, data):
cmd_option = 'certificates'
else:
cmd_option = 'account'

View File

@@ -198,7 +198,7 @@ def _process(proc_data):
Dictionary. Structured data to conform to the schema.
"""
# put itmes in lists
# put items in lists
try:
for entry in proc_data['schedule']:
entry['minute'] = entry['minute'].split(',')

View File

@@ -194,7 +194,7 @@ def _process(proc_data):
Dictionary. Structured data to conform to the schema.
"""
# put itmes in lists
# put items in lists
try:
for entry in proc_data['schedule']:
entry['minute'] = entry['minute'].split(',')

149
jc/parsers/debconf_show.py Normal file
View File

@@ -0,0 +1,149 @@
"""jc - JSON Convert `debconf-show` command output parser
Usage (cli):
$ debconf-show onlyoffice-documentserver | jc --debconf-show
or
$ jc debconf-show onlyoffice-documentserver
Usage (module):
import jc
result = jc.parse('debconf_show', debconf_show_command_output)
Schema:
[
{
"asked": boolean,
"packagename": string,
"name": string,
"value": string
}
]
Examples:
$ debconf-show onlyoffice-documentserver | jc --debconf-show -p
[
{
"asked": true,
"packagename": "onlyoffice",
"name": "jwt_secret",
"value": "aL8ei2iereuzee7cuJ6Cahjah1ixee2ah"
},
{
"asked": false,
"packagename": "onlyoffice",
"name": "db_pwd",
"value": "(password omitted)"
},
{
"asked": true,
"packagename": "onlyoffice",
"name": "rabbitmq_pwd",
"value": "(password omitted)"
},
{
"asked": true,
"packagename": "onlyoffice",
"name": "db_port",
"value": "5432"
},
{
"asked": true,
"packagename": "onlyoffice",
"name": "db_user",
"value": "onlyoffice"
},
{
"asked": true,
"packagename": "onlyoffice",
"name": "rabbitmq_proto",
"value": "amqp"
},
{
"asked": true,
"packagename": "onlyoffice",
"name": "cluster_mode",
"value": "false"
}
]
"""
from typing import List, Dict
from jc.jc_types import JSONDictType
import jc.utils
class info():
"""Provides parser metadata (version, author, etc.)"""
version = '1.0'
description = '`debconf-show` command parser'
author = 'Kelly Brazil'
author_email = 'kellyjonbrazil@gmail.com'
compatible = ['linux']
tags = ['command']
magic_commands = ['debconf-show']
__version__ = info.version
def _process(proc_data: JSONDictType) -> List[JSONDictType]:
"""
Final processing to conform to the schema.
Parameters:
proc_data: (Dictionary) raw structured data to process
Returns:
List of Dictionaries. Structured to conform to the schema.
"""
return proc_data
def parse(
data: str,
raw: bool = False,
quiet: bool = False
) -> List[JSONDictType]:
"""
Main text parsing function
Parameters:
data: (string) text data to parse
raw: (boolean) unprocessed output if True
quiet: (boolean) suppress warning messages if True
Returns:
List of Dictionaries. Raw or processed structured data.
"""
jc.utils.compatibility(__name__, info.compatible, quiet)
jc.utils.input_type_check(data)
raw_output: List = []
if jc.utils.has_data(data):
for line in filter(None, data.splitlines()):
output_line: Dict = {}
splitline = line.split(':', maxsplit=1)
output_line['asked'] = splitline[0].startswith('*')
packagename, key = splitline[0].split('/', maxsplit=1)
packagename = packagename[2:]
key = key.replace('-', '_')
val = splitline[1].strip()
output_line['packagename'] = packagename
output_line['name'] = key
output_line['value'] = val
raw_output.append(output_line)
return raw_output if raw else _process(raw_output)
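The per-line split logic above can be sketched in isolation; the sample line is hypothetical:

```python
# Sketch of the debconf-show line parsing: split on the first ":", then split
# the left side on "/" and drop the two-character "* "/"  " prefix.
line = "* onlyoffice-documentserver/jwt-secret: abc123"
left, val = line.split(':', maxsplit=1)
packagename, key = left.split('/', maxsplit=1)
record = {
    'asked': left.startswith('*'),
    'packagename': packagename[2:],   # drop the leading "* " or "  "
    'name': key.replace('-', '_'),
    'value': val.strip(),
}
print(record)
```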

View File

@@ -4,6 +4,7 @@ Options supported:
- `+noall +answer` options are supported in cases where only the answer
information is desired.
- `+axfr` option is supported on its own
- `+nsid` option is supported
The `when_epoch` calculated timestamp field is naive. (i.e. based on the
local time of the system the parser is run on)
@@ -322,7 +323,7 @@ import jc.utils
class info():
"""Provides parser metadata (version, author, etc.)"""
version = '2.4'
version = '2.5'
description = '`dig` command parser'
author = 'Kelly Brazil'
author_email = 'kellyjonbrazil@gmail.com'
@@ -427,6 +428,7 @@ def _parse_flags_line(flagsline):
def _parse_opt_pseudosection(optline):
# ;; OPT PSEUDOSECTION:
# ; EDNS: version: 0, flags:; udp: 4096
# ; NSID: 67 70 64 6e 73 2d 73 66 6f ("gpdns-sfo")
# ; COOKIE: 1cbc06703eaef210
if optline.startswith('; EDNS:'):
optline_list = optline.replace(',', ' ').split(';')
@@ -443,11 +445,18 @@ def _parse_opt_pseudosection(optline):
}
}
elif optline.startswith('; COOKIE:'):
if optline.startswith('; COOKIE:'):
return {
'cookie': optline.split()[2]
}
if optline.startswith('; NSID:'):
return {
'nsid': optline.split('("')[-1].rstrip('")')
}
return {}
def _parse_question(question):
# ;www.cnn.com. IN A
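The NSID extraction added above keeps only the quoted text inside `("...")`:

```python
# Sketch of the NSID line handling: split on '(\"' and strip the trailing '")'.
optline = '; NSID: 67 70 64 6e 73 2d 73 66 6f ("gpdns-sfo")'
nsid = optline.split('("')[-1].rstrip('")')
print(nsid)  # gpdns-sfo
```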

View File

@@ -67,12 +67,13 @@ Examples:
"_": "/usr/bin/env"
}
"""
import re
import jc.utils
class info():
"""Provides parser metadata (version, author, etc.)"""
version = '1.4'
version = '1.5'
description = '`env` command parser'
author = 'Kelly Brazil'
author_email = 'kellyjonbrazil@gmail.com'
@@ -83,6 +84,7 @@ class info():
__version__ = info.version
VAR_DEF_PATTERN = re.compile(r'^[a-zA-Z_][a-zA-Z0-9_]*=\S*.*$')
def _process(proc_data):
"""
@@ -96,8 +98,6 @@ def _process(proc_data):
List of Dictionaries. Structured data to conform to the schema.
"""
# rebuild output for added semantic information
processed = []
for k, v in proc_data.items():
proc_line = {}
@@ -120,24 +120,29 @@ def parse(data, raw=False, quiet=False):
Returns:
Dictionary of raw structured data or
List of Dictionaries of processed structured data
Dictionary of raw structured data (raw) or
List of Dictionaries of processed structured data (default)
"""
jc.utils.compatibility(__name__, info.compatible, quiet)
jc.utils.input_type_check(data)
raw_output = {}
# Clear any blank lines
cleandata = list(filter(None, data.splitlines()))
key = ''
value = None
if jc.utils.has_data(data):
for line in data.splitlines():
if VAR_DEF_PATTERN.match(line):
if not value is None:
raw_output[key] = value
key, value = line.split('=', maxsplit=1)
continue
for entry in cleandata:
parsed_line = entry.split('=', maxsplit=1)
raw_output[parsed_line[0]] = parsed_line[1]
if not value is None:
value = value + '\n' + line
if not value is None:
raw_output[key] = value
return raw_output if raw else _process(raw_output)
if raw:
return raw_output
else:
return _process(raw_output)
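The multi-line value handling above can be sketched with hypothetical input: lines that do not match the variable-definition pattern are folded into the previous value.

```python
import re

# Sketch of the env parser's multi-line value handling.
VAR_DEF_PATTERN = re.compile(r'^[a-zA-Z_][a-zA-Z0-9_]*=\S*.*$')
data = 'FOO=bar\nMULTI=line one\nline two\nBAZ=qux'
raw_output = {}
key, value = '', None
for line in data.splitlines():
    if VAR_DEF_PATTERN.match(line):
        if value is not None:
            raw_output[key] = value          # flush the previous variable
        key, value = line.split('=', maxsplit=1)
        continue
    if value is not None:
        value = value + '\n' + line          # continuation of a multi-line value
if value is not None:
    raw_output[key] = value
print(raw_output)
```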

137
jc/parsers/find.py Normal file
View File

@@ -0,0 +1,137 @@
"""jc - JSON Convert `find` command output parser
This parser returns a list of objects by default and a list of strings if
the `--raw` option is used.
Usage (cli):
$ find | jc --find
Usage (module):
import jc
result = jc.parse('find', find_command_output)
Schema:
[
{
"path": string,
"node": string,
"error": string
}
]
Examples:
$ find | jc --find -p
[
{
"path": "./directory",
"node": "filename"
},
{
"path": "./anotherdirectory",
"node": "anotherfile"
},
{
"path": null,
"node": null,
"error": "find: './inaccessible': Permission denied"
}
...
]
$ find | jc --find -p -r
[
"./templates/readme_template",
"./templates/manpage_template",
"./.github/workflows/pythonapp.yml",
...
]
"""
import jc.utils
class info():
"""Provides parser metadata (version, author, etc.)"""
version = '1.0'
description = '`find` command parser'
author = 'Solomon Leang'
author_email = 'solomonleang@gmail.com'
compatible = ['linux']
tags = ['command']
__version__ = info.version
def _process(proc_data):
"""
Final processing to conform to the schema.
Parameters:
proc_data: (List of Strings) raw structured data to process
Returns:
List of Dictionaries. Structured data to conform to the schema.
"""
processed = []
for index in proc_data:
path, node, error = "", "", ""
if index == ".":
node = "."
elif index.startswith('find: '):
error = index
else:
try:
path, node = index.rsplit('/', maxsplit=1)
except ValueError:
pass
proc_line = {
'path': path if path else None,
'node': node if node else None
}
if error:
proc_line.update(
{'error': error}
)
processed.append(proc_line)
return processed
def parse(data, raw=False, quiet=False):
"""
Main text parsing function
Parameters:
data: (string) text data to parse
raw: (boolean) unprocessed output if True
quiet: (boolean) suppress warning messages if True
Returns:
List of raw strings or
List of Dictionaries of processed structured data
"""
jc.utils.compatibility(__name__, info.compatible, quiet)
jc.utils.input_type_check(data)
raw_output = []
if jc.utils.has_data(data):
raw_output = data.splitlines()
if raw:
return raw_output
else:
return _process(raw_output)
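The path/node split used by the new `find` parser can be sketched as:

```python
# Sketch of the rsplit used above; entries without a "/" raise ValueError in
# the parser and fall through to null fields.
entry = './templates/readme_template'
path, node = entry.rsplit('/', maxsplit=1)
print(path, node)
```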

243
jc/parsers/host.py Normal file
View File

@@ -0,0 +1,243 @@
"""jc - JSON Convert `host` command output parser
Supports parsing of the most commonly used RR types (A, AAAA, MX, TXT)
Usage (cli):
$ host google.com | jc --host
or
$ jc host google.com
Usage (module):
import jc
result = jc.parse('host', host_command_output)
Schema:
[
{
"hostname": string,
"address": [
string
],
"v6-address": [
string
],
"mail": [
string
]
}
]
[
{
"nameserver": string,
"zone": string,
"mname": string,
"rname": string,
"serial": integer,
"refresh": integer,
"retry": integer,
"expire": integer,
"minimum": integer
}
]
Examples:
$ host google.com | jc --host
[
{
"hostname": "google.com",
"address": [
"142.251.39.110"
],
"v6-address": [
"2a00:1450:400e:811::200e"
],
"mail": [
"smtp.google.com."
]
}
]
$ jc host -C sunet.se
[
{
"nameserver": "2001:6b0:7::2",
"zone": "sunet.se",
"mname": "sunic.sunet.se.",
"rname": "hostmaster.sunet.se.",
"serial": 2023090401,
"refresh": 28800,
"retry": 7200,
"expire": 604800,
"minimum": 300
},
{
...
}
]
"""
from typing import Dict, List
import jc.utils
class info():
"""Provides parser metadata (version, author, etc.)"""
version = '1.0'
description = '`host` command parser'
author = 'Pettai'
author_email = 'pettai@sunet.se'
compatible = ['linux', 'darwin', 'cygwin', 'win32', 'aix', 'freebsd']
tags = ['command']
magic_commands = ['host']
__version__ = info.version
def _process(proc_data):
"""
Final processing to conform to the schema.
Parameters:
proc_data: (List of Dictionaries) raw structured data to process
Returns:
List of Dictionaries. Structured to conform to the schema.
"""
int_list = {'serial', 'refresh', 'retry', 'expire', 'minimum'}
for entry in proc_data:
for key in entry:
if key in int_list:
entry[key] = jc.utils.convert_to_int(entry[key])
return proc_data
def parse(data: str, raw: bool = False, quiet: bool = False):
"""
Main text parsing function
Parameters:
data: (string) text data to parse
raw: (boolean) unprocessed output if True
quiet: (boolean) suppress warning messages if True
Returns:
List of Dictionaries. Raw or processed structured data.
"""
jc.utils.compatibility(__name__, info.compatible, quiet)
jc.utils.input_type_check(data)
raw_output: List[Dict] = []
warned = False
if jc.utils.has_data(data):
addresses = []
v6addresses = []
mail = []
text = []
rrdata = {}
soaparse = False
for line in filter(None, data.splitlines()):
line = line.strip()
# default
if ' has address ' in line:
linedata = line.split(' ', maxsplit=3)
hostname = linedata[0]
address = linedata[3]
addresses.append(address)
rrdata.update({'hostname': hostname})
rrdata.update({'address': addresses})
continue
if ' has IPv6 address ' in line:
linedata = line.split(' ', maxsplit=4)
hostname = linedata[0]
v6address = linedata[4]
v6addresses.append(v6address)
rrdata.update({'hostname': hostname})
rrdata.update({'v6-address': v6addresses})
continue
if ' mail is handled by ' in line:
linedata = line.split(' ', maxsplit=6)
hostname = linedata[0]
mx = linedata[6]
mail.append(mx)
rrdata.update({'hostname': hostname})
rrdata.update({'mail': mail})
continue
# TXT parsing
if ' descriptive text ' in line:
linedata = line.split('descriptive text "', maxsplit=1)
hostname = linedata[0]
txt = linedata[1].strip('"')
text.append(txt)
rrdata.update({'hostname': hostname})
rrdata.update({'text': text})
continue
# -C / SOA parsing
if line.startswith('Nameserver '):
soaparse = True
rrdata = {}
linedata = line.split(' ', maxsplit=1)
nameserverip = linedata[1].rstrip(':')
rrdata.update({'nameserver': nameserverip})
continue
if ' has SOA record ' in line:
linedata = line.split(' ', maxsplit=10)
zone = linedata[0]
mname = linedata[4]
rname = linedata[5]
serial = linedata[6]
refresh = linedata[7]
retry = linedata[8]
expire = linedata[9]
minimum = linedata[10]
try:
rrdata.update(
{
'zone': zone,
'mname': mname,
'rname': rname,
'serial': serial,
'refresh': refresh,
'retry': retry,
'expire': expire,
'minimum': minimum
},
)
raw_output.append(rrdata)
except IndexError:
if not warned:
jc.utils.warning_message(['Unknown format detected.'])
warned = True
if not soaparse:
raw_output.append(rrdata)
return raw_output if raw else _process(raw_output)
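The A-record line parsing above can be sketched as follows; `maxsplit=3` keeps the address intact as the final field:

```python
# Sketch of the "has address" line handling in the new host parser.
line = 'google.com has address 142.251.39.110'
linedata = line.split(' ', maxsplit=3)
hostname, address = linedata[0], linedata[3]
print(hostname, address)
```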

689
jc/parsers/iftop.py Normal file
View File

@@ -0,0 +1,689 @@
"""jc - JSON Convert `iftop` command output parser
Usage (cli):
$ iftop -i <device> -t -B -s1 | jc --iftop
Usage (module):
import jc
result = jc.parse('iftop', iftop_command_output)
Schema:
[
{
"device": string,
"ip_address": string,
"mac_address": string,
"clients": [
{
"index": integer,
"connections": [
{
"host_name": string,
"host_port": string, # can be service or missing
"last_2s": integer,
"last_10s": integer,
"last_40s": integer,
"cumulative": integer,
"direction": string
}
]
}
],
"total_send_rate": {
"last_2s": integer,
"last_10s": integer,
"last_40s": integer
},
"total_receive_rate": {
"last_2s": integer,
"last_10s": integer,
"last_40s": integer
},
"total_send_and_receive_rate": {
"last_2s": integer,
"last_10s": integer,
"last_40s": integer
},
"peak_rate": {
"last_2s": integer,
"last_10s": integer,
"last_40s": integer
},
"cumulative_rate": {
"last_2s": integer,
"last_10s": integer,
"last_40s": integer
}
}
]
Examples:
$ iftop -i enp0s3 -t -P -s1 | jc --iftop -p
[
{
"device": "enp0s3",
"ip_address": "10.10.15.129",
"mac_address": "08:00:27:c0:4a:4f",
"clients": [
{
"index": 1,
"connections": [
{
"host_name": "ubuntu-2004-clean-01",
"host_port": "ssh",
"last_2s": 448,
"last_10s": 448,
"last_40s": 448,
"cumulative": 112,
"direction": "send"
},
{
"host_name": "10.10.15.72",
"host_port": "40876",
"last_2s": 208,
"last_10s": 208,
"last_40s": 208,
"cumulative": 52,
"direction": "receive"
}
]
}
],
"total_send_rate": {
"last_2s": 448,
"last_10s": 448,
"last_40s": 448
},
"total_receive_rate": {
"last_2s": 208,
"last_10s": 208,
"last_40s": 208
},
"total_send_and_receive_rate": {
"last_2s": 656,
"last_10s": 656,
"last_40s": 656
},
"peak_rate": {
"last_2s": 448,
"last_10s": 208,
"last_40s": 656
},
"cumulative_rate": {
"last_2s": 112,
"last_10s": 52,
"last_40s": 164
}
}
]
$ iftop -i enp0s3 -t -P -s1 | jc --iftop -p -r
[
{
"device": "enp0s3",
"ip_address": "10.10.15.129",
"mac_address": "11:22:33:44:55:66",
"clients": [
{
"index": 1,
"connections": [
{
"host_name": "ubuntu-2004-clean-01",
"host_port": "ssh",
"last_2s": "448b",
"last_10s": "448b",
"last_40s": "448b",
"cumulative": "112B",
"direction": "send"
},
{
"host_name": "10.10.15.72",
"host_port": "40876",
"last_2s": "208b",
"last_10s": "208b",
"last_40s": "208b",
"cumulative": "52B",
"direction": "receive"
}
]
}
],
"total_send_rate": {
"last_2s": "448b",
"last_10s": "448b",
"last_40s": "448b"
},
"total_receive_rate": {
"last_2s": "208b",
"last_10s": "208b",
"last_40s": "208b"
},
"total_send_and_receive_rate": {
"last_2s": "656b",
"last_10s": "656b",
"last_40s": "656b"
},
"peak_rate": {
"last_2s": "448b",
"last_10s": "208b",
"last_40s": "656b"
},
"cumulative_rate": {
"last_2s": "112B",
"last_10s": "52B",
"last_40s": "164B"
}
}
]
"""
import re
from typing import List, Dict
from jc.jc_types import JSONDictType
import jc.utils
from collections import namedtuple
from numbers import Number
class info:
"""Provides parser metadata (version, author, etc.)"""
version = "1.0"
description = "`iftop` command parser"
author = "Ron Green"
author_email = "11993626+georgettica@users.noreply.github.com"
compatible = ["linux"]
tags = ["command"]
__version__ = info.version
def _process(proc_data: List[JSONDictType], quiet: bool = False) -> List[JSONDictType]:
"""
Final processing to conform to the schema.
Parameters:
proc_data: (List of Dictionaries) raw structured data to process
Returns:
List of Dictionaries. Structured to conform to the schema.
"""
string_to_bytes_fields = ["last_2s", "last_10s", "last_40s", "cumulative"]
one_nesting = [
"total_send_rate",
"total_receive_rate",
"total_send_and_receive_rate",
"peak_rate",
"cumulative_rate",
]
if not proc_data:
return proc_data
for entry in proc_data:
# print(f"{entry=}")
for entry_key in entry:
# print(f"{entry_key=}")
if entry_key in one_nesting:
# print(f"{entry[entry_key]=}")
for one_nesting_item_key in entry[entry_key]:
# print(f"{one_nesting_item_key=}")
if one_nesting_item_key in string_to_bytes_fields:
entry[entry_key][one_nesting_item_key] = _parse_size(entry[entry_key][one_nesting_item_key])
elif entry_key == "clients":
for client in entry[entry_key]:
# print(f"{client=}")
if "connections" not in client:
continue
for connection in client["connections"]:
# print(f"{connection=}")
for connection_key in connection:
# print(f"{connection_key=}")
if connection_key in string_to_bytes_fields:
connection[connection_key] = _parse_size(connection[connection_key])
return proc_data
# _parse_size from https://github.com/xolox/python-humanfriendly
# Copyright (c) 2021 Peter Odding
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
# Note: this function can be replaced with jc.utils.convert_size_to_int
# in the future.
def _parse_size(size, binary=False):
"""
Parse a human readable data size and return the number of bytes.
:param size: The human readable file size to parse (a string).
:param binary: :data:`True` to use binary multiples of bytes (base-2) for
ambiguous unit symbols and names, :data:`False` to use
decimal multiples of bytes (base-10).
:returns: The corresponding size in bytes (an integer).
:raises: :exc:`InvalidSize` when the input can't be parsed.
This function knows how to parse sizes in bytes, kilobytes, megabytes,
gigabytes, terabytes and petabytes. Some examples:
>>> from humanfriendly import parse_size
>>> parse_size('42')
42
>>> parse_size('13b')
13
>>> parse_size('5 bytes')
5
>>> parse_size('1 KB')
1000
>>> parse_size('1 kilobyte')
1000
>>> parse_size('1 KiB')
1024
>>> parse_size('1 KB', binary=True)
1024
>>> parse_size('1.5 GB')
1500000000
>>> parse_size('1.5 GB', binary=True)
1610612736
"""
def tokenize(text):
tokenized_input = []
for token in re.split(r'(\d+(?:\.\d+)?)', text):
token = token.strip()
if re.match(r'\d+\.\d+', token):
tokenized_input.append(float(token))
elif token.isdigit():
tokenized_input.append(int(token))
elif token:
tokenized_input.append(token)
return tokenized_input
SizeUnit = namedtuple('SizeUnit', 'divider, symbol, name')
CombinedUnit = namedtuple('CombinedUnit', 'decimal, binary')
disk_size_units = (
CombinedUnit(SizeUnit(1000**1, 'KB', 'kilobyte'), SizeUnit(1024**1, 'KiB', 'kibibyte')),
CombinedUnit(SizeUnit(1000**2, 'MB', 'megabyte'), SizeUnit(1024**2, 'MiB', 'mebibyte')),
CombinedUnit(SizeUnit(1000**3, 'GB', 'gigabyte'), SizeUnit(1024**3, 'GiB', 'gibibyte')),
CombinedUnit(SizeUnit(1000**4, 'TB', 'terabyte'), SizeUnit(1024**4, 'TiB', 'tebibyte')),
CombinedUnit(SizeUnit(1000**5, 'PB', 'petabyte'), SizeUnit(1024**5, 'PiB', 'pebibyte')),
CombinedUnit(SizeUnit(1000**6, 'EB', 'exabyte'), SizeUnit(1024**6, 'EiB', 'exbibyte')),
CombinedUnit(SizeUnit(1000**7, 'ZB', 'zettabyte'), SizeUnit(1024**7, 'ZiB', 'zebibyte')),
CombinedUnit(SizeUnit(1000**8, 'YB', 'yottabyte'), SizeUnit(1024**8, 'YiB', 'yobibyte')),
)
tokens = tokenize(size)
if tokens and isinstance(tokens[0], Number):
# Get the normalized unit (if any) from the tokenized input.
normalized_unit = tokens[1].lower() if len(tokens) == 2 and isinstance(tokens[1], str) else ''
# If the input contains only a number, it's assumed to be the number of
# bytes. The second token can also explicitly reference the unit bytes.
if len(tokens) == 1 or normalized_unit.startswith('b'):
return int(tokens[0])
# Otherwise we expect two tokens: A number and a unit.
if normalized_unit:
# Convert plural units to singular units, for details:
# https://github.com/xolox/python-humanfriendly/issues/26
normalized_unit = normalized_unit.rstrip('s')
for unit in disk_size_units:
# First we check for unambiguous symbols (KiB, MiB, GiB, etc)
# and names (kibibyte, mebibyte, gibibyte, etc) because their
# handling is always the same.
if normalized_unit in (unit.binary.symbol.lower(), unit.binary.name.lower()):
return int(tokens[0] * unit.binary.divider)
# Now we will deal with ambiguous prefixes (K, M, G, etc),
# symbols (KB, MB, GB, etc) and names (kilobyte, megabyte,
# gigabyte, etc) according to the caller's preference.
if (normalized_unit in (unit.decimal.symbol.lower(), unit.decimal.name.lower()) or
normalized_unit.startswith(unit.decimal.symbol[0].lower())):
return int(tokens[0] * (unit.binary.divider if binary else unit.decimal.divider))
# We failed to parse the size specification.
return None
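A simplified, decimal-only sketch of the size parsing above (the real `_parse_size` also handles binary units, unit names, and plurals):

```python
import re

def parse_size_sketch(size):
    # Split a number from an optional unit symbol; decimal multipliers only.
    match = re.match(r'\s*(\d+(?:\.\d+)?)\s*([A-Za-z]*)\s*$', size)
    if not match:
        return None
    number, unit = float(match.group(1)), match.group(2).lower()
    multipliers = {'': 1, 'b': 1, 'kb': 1000, 'mb': 1000 ** 2, 'gb': 1000 ** 3}
    if unit not in multipliers:
        return None
    return int(number * multipliers[unit])

print(parse_size_sketch('448b'))    # 448
print(parse_size_sketch('1.5 GB'))  # 1500000000
```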
def parse(data: str, raw: bool = False, quiet: bool = False) -> List[JSONDictType]:
"""
Main text parsing function
Parameters:
data: (string) text data to parse
raw: (boolean) unprocessed output if True
quiet: (boolean) suppress warning messages if True
Returns:
List of Dictionaries. Raw or processed structured data.
"""
jc.utils.compatibility(__name__, info.compatible, quiet)
jc.utils.input_type_check(data)
raw_output: List[Dict] = []
interface_item: Dict = {}
current_client: Dict = {}
clients: List = []
is_previous_line_interface = False
saw_already_host_line = False
before_arrow = r"\s+(?P<index>\d+)\s+(?P<host_name>[^\s]+):(?P<host_port>[^\s]+)\s+"
before_arrow_no_port = r"\s+(?P<index>\d+)\s+(?P<host_name>[^\s]+)\s+"
after_arrow_before_newline = r"\s+(?P<send_last_2s>[^\s]+)\s+(?P<send_last_10s>[^\s]+)\s+(?P<send_last_40s>[^\s]+)\s+(?P<send_cumulative>[^\s]+)"
newline_before_arrow = r"\s+(?P<receive_ip>.+):(?P<receive_port>\w+)\s+"
newline_before_arrow_no_port = r"\s+(?P<receive_ip>.+)\s+"
after_arrow_till_end = r"\s+(?P<receive_last_2s>[^\s]+)\s+(?P<receive_last_10s>[^\s]+)\s+(?P<receive_last_40s>[^\s]+)\s+(?P<receive_cumulative>[^\s]+)"
re_linux_clients_before_newline = re.compile(
rf"{before_arrow}=>{after_arrow_before_newline}"
)
re_linux_clients_before_newline_no_port = re.compile(
rf"{before_arrow_no_port}=>{after_arrow_before_newline}"
)
re_linux_clients_after_newline_no_port = re.compile(
rf"{newline_before_arrow_no_port}<={after_arrow_till_end}"
)
re_linux_clients_after_newline = re.compile(
rf"{newline_before_arrow}<={after_arrow_till_end}"
)
re_total_send_rate = re.compile(
r"Total send rate:\s+(?P<total_send_rate_last_2s>[^\s]+)\s+(?P<total_send_rate_last_10s>[^\s]+)\s+(?P<total_send_rate_last_40s>[^\s]+)"
)
re_total_receive_rate = re.compile(
r"Total receive rate:\s+(?P<total_receive_rate_last_2s>[^\s]+)\s+(?P<total_receive_rate_last_10s>[^\s]+)\s+(?P<total_receive_rate_last_40s>[^\s]+)"
)
re_total_send_and_receive_rate = re.compile(
r"Total send and receive rate:\s+(?P<total_send_and_receive_rate_last_2s>[^\s]+)\s+(?P<total_send_and_receive_rate_last_10s>[^\s]+)\s+(?P<total_send_and_receive_rate_last_40s>[^\s]+)"
)
re_peak_rate = re.compile(
r"Peak rate \(sent/received/total\):\s+(?P<peak_rate_sent>[^\s]+)\s+(?P<peak_rate_received>[^\s]+)\s+(?P<peak_rate_total>[^\s]+)"
)
re_cumulative_rate = re.compile(
r"Cumulative \(sent/received/total\):\s+(?P<cumulative_rate_sent>[^\s]+)\s+(?P<cumulative_rate_received>[^\s]+)\s+(?P<cumulative_rate_total>[^\s]+)"
)
jc.utils.compatibility(__name__, info.compatible, quiet)
jc.utils.input_type_check(data)
if not jc.utils.has_data(data):
return raw_output if raw else _process(raw_output, quiet=quiet)
for line in filter(None, data.splitlines()):
if line.startswith("interface:"):
# Example:
# interface: enp0s3
interface_item["device"] = line.split(":")[1].strip()
elif line.startswith("IP address is:"):
# Example:
# IP address is: 10.10.15.129
interface_item["ip_address"] = line.split(":")[1].strip()
elif line.startswith("MAC address is:"):
# Example:
# MAC address is: 08:00:27:c0:4a:4f
# strip off the "MAC address is: " part
data_without_front_list = line.split(":")[1:]
# join the remaining parts back together
data_without_front = ":".join(data_without_front_list)
interface_item["mac_address"] = data_without_front.strip()
elif line.startswith("Listening on"):
# Example:
# Listening on enp0s3
pass
elif (
line.startswith("# Host name (port/service if enabled)")
and not saw_already_host_line
):
saw_already_host_line = True
# Example:
# # Host name (port/service if enabled) last 2s last 10s last 40s cumulative
pass
elif (
line.startswith("# Host name (port/service if enabled)")
and saw_already_host_line
):
old_interface_item, interface_item = interface_item, {}
interface_item.update(
{
"device": old_interface_item["device"],
"ip_address": old_interface_item["ip_address"],
"mac_address": old_interface_item["mac_address"],
}
)
elif "=>" in line and is_previous_line_interface and ":" in line:
# should not happen
pass
elif "=>" in line and not is_previous_line_interface and ":" in line:
# Example:
# 1 ubuntu-2004-clean-01:ssh => 448b 448b 448b 112B
is_previous_line_interface = True
match_raw = re_linux_clients_before_newline.match(line)
if not match_raw:
# this is a bug in iftop
continue
match_dict = match_raw.groupdict()
current_client = {}
current_client["index"] = int(match_dict["index"])
current_client["connections"] = []
current_client_send = {
"host_name": match_dict["host_name"],
"host_port": match_dict["host_port"],
"last_2s": match_dict["send_last_2s"],
"last_10s": match_dict["send_last_10s"],
"last_40s": match_dict["send_last_40s"],
"cumulative": match_dict["send_cumulative"],
"direction": "send",
}
current_client["connections"].append(current_client_send)
# not adding yet as the receive part is not yet parsed
elif "=>" in line and not is_previous_line_interface and ":" not in line:
# should not happen
pass
elif "=>" in line and is_previous_line_interface and ":" not in line:
is_previous_line_interface = True
match_raw = re_linux_clients_before_newline_no_port.match(line)
if not match_raw:
# this is a bug in iftop
continue
match_dict = match_raw.groupdict()
current_client = {}
current_client["index"] = int(match_dict["index"])
current_client["connections"] = []
current_client_send = {
"host_name": match_dict["host_name"],
"last_2s": match_dict["send_last_2s"],
"last_10s": match_dict["send_last_10s"],
"last_40s": match_dict["send_last_40s"],
"cumulative": match_dict["send_cumulative"],
"direction": "send",
}
current_client["connections"].append(current_client_send)
# not adding yet as the receive part is not yet parsed
elif "<=" in line and not is_previous_line_interface and ":" in line:
# should not happen
pass
elif "<=" in line and is_previous_line_interface and ":" in line:
# Example:
# 10.10.15.72:40876 <= 208b 208b 208b 52B
is_previous_line_interface = False
match_raw = re_linux_clients_after_newline.match(line)
if not match_raw:
# this is a bug in iftop
continue
match_dict = match_raw.groupdict()
current_client_receive = {
"host_name": match_dict["receive_ip"],
"host_port": match_dict["receive_port"],
"last_2s": match_dict["receive_last_2s"],
"last_10s": match_dict["receive_last_10s"],
"last_40s": match_dict["receive_last_40s"],
"cumulative": match_dict["receive_cumulative"],
"direction": "receive",
}
current_client["connections"].append(current_client_receive)
clients.append(current_client)
elif "<=" in line and not is_previous_line_interface and ":" not in line:
# should not happen
pass
elif "<=" in line and is_previous_line_interface and ":" not in line:
# Example:
# 10.10.15.72:40876 <= 208b 208b 208b 52B
is_previous_line_interface = False
match_raw = re_linux_clients_after_newline_no_port.match(line)
if not match_raw:
# this is a bug in iftop
continue
match_dict = match_raw.groupdict()
current_client_receive = {
"host_name": match_dict["receive_ip"],
"last_2s": match_dict["receive_last_2s"],
"last_10s": match_dict["receive_last_10s"],
"last_40s": match_dict["receive_last_40s"],
"cumulative": match_dict["receive_cumulative"],
"direction": "receive",
}
current_client["connections"].append(current_client_receive)
clients.append(current_client)
# check if all of the characters are dashes or equal signs
elif all(c == "-" for c in line):
pass
elif line.startswith("Total send rate"):
# Example:
# Total send rate: 448b 448b 448b
match_raw = re_total_send_rate.match(line)
if not match_raw:
# this is a bug in iftop
continue
match_dict = match_raw.groupdict()
interface_item["total_send_rate"] = {}
interface_item["total_send_rate"].update(
{
"last_2s": match_dict["total_send_rate_last_2s"],
"last_10s": match_dict["total_send_rate_last_10s"],
"last_40s": match_dict["total_send_rate_last_40s"],
}
)
elif line.startswith("Total receive rate"):
# Example:
# Total receive rate: 208b 208b 208b
match_raw = re_total_receive_rate.match(line)
if not match_raw:
# this is a bug in iftop
continue
match_dict = match_raw.groupdict()
interface_item["total_receive_rate"] = {}
interface_item["total_receive_rate"].update(
{
"last_2s": match_dict["total_receive_rate_last_2s"],
"last_10s": match_dict["total_receive_rate_last_10s"],
"last_40s": match_dict["total_receive_rate_last_40s"],
}
)
elif line.startswith("Total send and receive rate"):
# Example:
# Total send and receive rate: 656b 656b 656b
match_raw = re_total_send_and_receive_rate.match(line)
if not match_raw:
# this is a bug in iftop
continue
match_dict = match_raw.groupdict()
interface_item["total_send_and_receive_rate"] = {}
interface_item["total_send_and_receive_rate"].update(
{
"last_2s": match_dict["total_send_and_receive_rate_last_2s"],
"last_10s": match_dict["total_send_and_receive_rate_last_10s"],
"last_40s": match_dict["total_send_and_receive_rate_last_40s"],
}
)
elif line.startswith("Peak rate"):
match_raw = re_peak_rate.match(line)
if not match_raw:
# this is a bug in iftop
continue
match_dict = match_raw.groupdict()
interface_item["peak_rate"] = {}
interface_item["peak_rate"].update(
{
"last_2s": match_dict["peak_rate_sent"],
"last_10s": match_dict["peak_rate_received"],
"last_40s": match_dict["peak_rate_total"],
}
)
elif line.startswith("Cumulative"):
match_raw = re_cumulative_rate.match(line)
if not match_raw:
# this is a bug in iftop
continue
match_dict = match_raw.groupdict()
interface_item["cumulative_rate"] = {}
interface_item["cumulative_rate"].update(
{
"last_2s": match_dict["cumulative_rate_sent"],
"last_10s": match_dict["cumulative_rate_received"],
"last_40s": match_dict["cumulative_rate_total"],
}
)
elif all(c == "=" for c in line):
interface_item["clients"] = clients
clients = []
# append a copy here; otherwise every entry in raw_output stays linked to the same interface_item object
raw_output.append(interface_item.copy())
return raw_output if raw else _process(raw_output, quiet=quiet)
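The `.copy()` on the line above matters: appending the same dict object and then mutating it would alias every entry in `raw_output`. A minimal stdlib sketch of the pitfall:

```python
# Demonstrates why the iftop parser appends interface_item.copy():
# appending one dict object and reusing it aliases every list entry.
interface_item = {}
linked, copied = [], []

for name in ["eth0", "eth1"]:
    interface_item["interface"] = name
    linked.append(interface_item)         # same object every time
    copied.append(interface_item.copy())  # independent snapshot

print([d["interface"] for d in linked])  # ['eth1', 'eth1'] - aliased
print([d["interface"] for d in copied])  # ['eth0', 'eth1'] - preserved
```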


@@ -17,12 +17,12 @@ contained in lists/arrays.
Usage (cli):
$ cat foo.ini | jc --ini
$ cat foo.ini | jc --ini-dup
Usage (module):
import jc
result = jc.parse('ini', ini_file_output)
result = jc.parse('ini_dup', ini_file_output)
Schema:
@@ -62,7 +62,7 @@ Examples:
fruit = peach
color = green
$ cat example.ini | jc --ini -p
$ cat example.ini | jc --ini-dup -p
{
"foo": [
"fiz"
@@ -97,7 +97,7 @@ import uuid
class info():
"""Provides parser metadata (version, author, etc.)"""
version = '1.0'
version = '1.1'
description = 'INI with duplicate key file parser'
author = 'Kelly Brazil'
author_email = 'kellyjonbrazil@gmail.com'

jc/parsers/ip_route.py Normal file

@@ -0,0 +1,145 @@
"""jc - JSON Convert `ip route` command output parser
Usage (cli):
$ ip route | jc --ip-route
or
$ jc ip-route
Usage (module):
import jc
result = jc.parse('ip_route', ip_route_command_output)
Schema:
[
{
"ip": string,
"via": string,
"dev": string,
"metric": integer,
"proto": string,
"scope": string,
"src": string,
"via": string,
"status": string
}
]
Examples:
$ ip route | jc --ip-route -p
[
{
"ip": "10.0.2.0/24",
"dev": "enp0s3",
"proto": "kernel",
"scope": "link",
"src": "10.0.2.15",
"metric": 100
}
]
"""
from typing import Dict
import jc.utils
class info:
"""Provides parser metadata (version, author, etc.)"""
version = '1.0'
description = '`ip route` command parser'
author = 'Julian Jackson'
author_email = 'jackson.julian55@yahoo.com'
compatible = ['linux']
magic_commands = ['ip route']
tags = ['command']
__version__ = info.version
def parse(data, raw=False, quiet=False):
"""
Main text parsing function
Parameters:
data: (string) text data to parse
raw: (boolean) unprocessed output if True
quiet: (boolean) suppress warning messages if True
Returns:
List of Dictionaries. Raw or processed structured data.
"""
structure = {}
items = []
lines = data.splitlines()
index = 0
place = 0
inc = 0
for line in lines:
temp = line.split()
for word in temp:
if word == 'via':
y = {'via': temp[place + 1]}
place += 1
structure.update(y)
elif word == 'dev':
y = {'dev': temp[place + 1]}
place += 1
structure.update(y)
elif word == 'metric':
if raw:
y = {'metric': temp[place + 1]}
else:
y = {'metric': jc.utils.convert_to_int(temp[place+1])}
place += 1
structure.update(y)
elif word == 'proto':
y = {'proto': temp[place + 1]}
place += 1
structure.update(y)
elif word == 'scope':
y = {'scope': temp[place + 1]}
place += 1
structure.update(y)
elif word == 'src':
y = {'src': temp[place + 1]}
place += 1
structure.update(y)
elif word == 'status':
y = {'status': temp[place + 1]}
place += 1
structure.update(y)
elif word == 'default':
y = {'ip': 'default'}
place += 1
structure.update(y)
elif word == 'linkdown':
y = {'status': 'linkdown'}
place += 1
structure.update(y)
else:
y = {'ip': temp[0]}
place += 1
structure.update(y)
if y.get("ip") != "":
items.append(structure)
structure = {}
place = 0
index += 1
inc += 1
jc.utils.compatibility(__name__, info.compatible, quiet)
jc.utils.input_type_check(data)
if not jc.utils.has_data(data):
return []
return items
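The keyword scan in this parser can be illustrated with a self-contained sketch. This is a simplification (one route line, without the parser's `place`/`index` bookkeeping): each known keyword consumes the token that follows it.

```python
# Simplified sketch of the keyword scan used by the ip_route parser:
# each recognized keyword consumes the next token as its value.
def parse_route_line(line):
    tokens = line.split()
    route = {"ip": tokens[0]}
    keywords = {"via", "dev", "proto", "scope", "src", "metric"}
    i = 1
    while i < len(tokens):
        if tokens[i] in keywords:
            route[tokens[i]] = tokens[i + 1]
            i += 2
        else:
            i += 1
    return route

line = "10.0.2.0/24 dev enp0s3 proto kernel scope link src 10.0.2.15 metric 100"
print(parse_route_line(line))
```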


@@ -25,7 +25,7 @@ Schema:
"num" integer,
"pkts": integer,
"bytes": integer, # converted based on suffix
"target": string,
"target": string, # Null if blank
"prot": string,
"opt": string, # "--" = Null
"in": string,
@@ -163,7 +163,7 @@ import jc.utils
class info():
"""Provides parser metadata (version, author, etc.)"""
version = '1.8'
version = '1.9'
description = '`iptables` command parser'
author = 'Kelly Brazil'
author_email = 'kellyjonbrazil@gmail.com'
@@ -222,6 +222,10 @@ def _process(proc_data):
if rule['opt'] == '--':
rule['opt'] = None
if 'target' in rule:
if rule['target'] == '':
rule['target'] = None
return proc_data
@@ -271,15 +275,18 @@ def parse(data, raw=False, quiet=False):
continue
else:
# sometimes the "target" column is blank. Stuff in a dummy character
if headers[0] == 'target' and line.startswith(' '):
line = '\u2063' + line
rule = line.split(maxsplit=len(headers) - 1)
temp_rule = dict(zip(headers, rule))
if temp_rule:
if temp_rule.get('target') == '\u2063':
temp_rule['target'] = ''
chain['rules'].append(temp_rule)
if chain:
raw_output.append(chain)
if raw:
return raw_output
else:
return _process(raw_output)
return raw_output if raw else _process(raw_output)
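The invisible-separator trick above can be shown in isolation: prepending U+2063 (which `str.split` does not treat as whitespace) keeps the column alignment when the "target" column is blank. Sketch with a hypothetical rule line:

```python
# Sketch of the iptables blank-column workaround: stuff an invisible
# separator (U+2063) into a line whose "target" column is empty so
# str.split still yields one token per header.
headers = ["target", "prot", "opt", "source", "destination"]
line = "     tcp  --  0.0.0.0/0  0.0.0.0/0"   # blank target column

line = "\u2063" + line
rule = dict(zip(headers, line.split(maxsplit=len(headers) - 1)))
if rule.get("target") == "\u2063":
    rule["target"] = ""

print(rule["target"], rule["prot"])
```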


@@ -1,46 +0,0 @@
"""jc - JSON Convert ISO 8601 Datetime string parser
This parser has been renamed to datetime-iso (cli) or datetime_iso (module).
This parser will be removed in a future version, so please start using
the new parser name.
"""
from jc.parsers import datetime_iso
import jc.utils
class info():
"""Provides parser metadata (version, author, etc.)"""
version = '1.1'
description = 'Deprecated - please use datetime-iso'
author = 'Kelly Brazil'
author_email = 'kellyjonbrazil@gmail.com'
details = 'Deprecated - please use datetime-iso'
compatible = ['linux', 'aix', 'freebsd', 'darwin', 'win32', 'cygwin']
tags = ['standard', 'string']
deprecated = True
__version__ = info.version
def parse(data, raw=False, quiet=False):
"""
This parser is deprecated and calls datetime_iso. Please use datetime_iso
directly. This parser will be removed in the future.
Parameters:
data: (string) text data to parse
raw: (boolean) unprocessed output if True
quiet: (boolean) suppress warning messages if True
Returns:
Dictionary. Raw or processed structured data.
"""
jc.utils.warning_message([
'iso-datetime parser is deprecated. Please use datetime-iso instead.'
])
return datetime_iso.parse(data, raw=raw, quiet=quiet)


@@ -1,6 +1,6 @@
"""jc - JSON Convert `last` and `lastb` command output parser
Supports `-w` and `-F` options.
Supports `-w`, `-F`, and `-x` options.
Calculated epoch time fields are naive (i.e. based on the local time of the
system the parser is run on) since there is no timezone information in the
@@ -103,10 +103,15 @@ Examples:
import re
import jc.utils
DATE_RE = re.compile(r'[MTWFS][ouerha][nedritnu] [JFMASOND][aepuco][nbrynlgptvc]')
LAST_F_DATE_RE = re.compile(r'\d\d:\d\d:\d\d \d\d\d\d')
LOGIN_LOGOUT_EPOCH_RE = re.compile(r'.*\d\d:\d\d:\d\d \d\d\d\d.*')
LOGOUT_IGNORED_EVENTS = ['down', 'crash']
class info():
"""Provides parser metadata (version, author, etc.)"""
version = '1.8'
version = '1.9'
description = '`last` and `lastb` command parser'
author = 'Kelly Brazil'
author_email = 'kellyjonbrazil@gmail.com'
@@ -138,9 +143,6 @@ def _process(proc_data):
if 'tty' in entry and entry['tty'] == '~':
entry['tty'] = None
if 'tty' in entry and entry['tty'] == 'system_boot':
entry['tty'] = 'system boot'
if 'hostname' in entry and entry['hostname'] == '-':
entry['hostname'] = None
@@ -153,11 +155,11 @@ def _process(proc_data):
if 'logout' in entry and entry['logout'] == 'gone_-_no_logout':
entry['logout'] = 'gone - no logout'
if 'login' in entry and re.match(r'.*\d\d:\d\d:\d\d \d\d\d\d.*', entry['login']):
if 'login' in entry and LOGIN_LOGOUT_EPOCH_RE.match(entry['login']):
timestamp = jc.utils.timestamp(entry['login'])
entry['login_epoch'] = timestamp.naive
if 'logout' in entry and re.match(r'.*\d\d:\d\d:\d\d \d\d\d\d.*', entry['logout']):
if 'logout' in entry and LOGIN_LOGOUT_EPOCH_RE.match(entry['logout']):
timestamp = jc.utils.timestamp(entry['logout'])
entry['logout_epoch'] = timestamp.naive
@@ -194,66 +196,71 @@ def parse(data, raw=False, quiet=False):
# Clear any blank lines
cleandata = list(filter(None, data.splitlines()))
if jc.utils.has_data(data):
if not jc.utils.has_data(data):
return []
for entry in cleandata:
output_line = {}
for entry in cleandata:
output_line = {}
if (entry.startswith('wtmp begins ') or
entry.startswith('btmp begins ') or
entry.startswith('utx.log begins ')):
if any(
entry.startswith(f'{prefix} begins ')
for prefix in ['wtmp', 'btmp', 'utx.log']
):
continue
continue
entry = entry.replace('boot time', 'boot_time')
entry = entry.replace(' still logged in', '- still_logged_in')
entry = entry.replace(' gone - no logout', '- gone_-_no_logout')
entry = entry.replace('system boot', 'system_boot')
entry = entry.replace('boot time', 'boot_time')
entry = entry.replace(' still logged in', '- still_logged_in')
entry = entry.replace(' gone - no logout', '- gone_-_no_logout')
linedata = entry.split()
linedata = entry.split()
if re.match(r'[MTWFS][ouerha][nedritnu] [JFMASOND][aepuco][nbrynlgptvc]', ' '.join(linedata[2:4])):
linedata.insert(2, '-')
# Adding "-" before the date part.
if DATE_RE.match(' '.join(linedata[2:4])):
linedata.insert(2, '-')
# freebsd fix
if linedata[0] == 'boot_time':
linedata.insert(1, '-')
linedata.insert(1, '~')
# freebsd fix
if linedata[0] == 'boot_time':
linedata.insert(1, '-')
linedata.insert(1, '~')
output_line['user'] = linedata[0]
output_line['tty'] = linedata[1]
output_line['hostname'] = linedata[2]
output_line['user'] = linedata[0]
# last -F support
if re.match(r'\d\d:\d\d:\d\d \d\d\d\d', ' '.join(linedata[6:8])):
output_line['login'] = ' '.join(linedata[3:8])
# Fix for last -x (runlevel).
if output_line['user'] == 'runlevel' and linedata[1] == '(to':
linedata[1] += f' {linedata.pop(2)} {linedata.pop(2)}'
elif output_line['user'] in ['reboot', 'shutdown'] and linedata[1] == 'system': # system down\system boot
linedata[1] += f' {linedata.pop(2)}'
if len(linedata) > 9 and linedata[9] != 'crash' and linedata[9] != 'down':
output_line['tty'] = linedata[1]
output_line['hostname'] = linedata[2]
# last -F support
if LAST_F_DATE_RE.match(' '.join(linedata[6:8])):
output_line['login'] = ' '.join(linedata[3:8])
if len(linedata) > 9:
if linedata[9] not in LOGOUT_IGNORED_EVENTS:
output_line['logout'] = ' '.join(linedata[9:14])
if len(linedata) > 9 and (linedata[9] == 'crash' or linedata[9] == 'down'):
else:
output_line['logout'] = linedata[9]
# add more items to the list to line up duration
linedata.insert(10, '-')
linedata.insert(10, '-')
linedata.insert(10, '-')
linedata.insert(10, '-')
for _ in range(4):
linedata.insert(10, '-')
if len(linedata) > 14:
output_line['duration'] = linedata[14].replace('(', '').replace(')', '')
if len(linedata) > 14:
output_line['duration'] = linedata[14].replace('(', '').replace(')', '')
else: # normal last support
output_line['login'] = ' '.join(linedata[3:7])
# normal last support
else:
output_line['login'] = ' '.join(linedata[3:7])
if len(linedata) > 8:
output_line['logout'] = linedata[8]
if len(linedata) > 8:
output_line['logout'] = linedata[8]
if len(linedata) > 9:
output_line['duration'] = linedata[9].replace('(', '').replace(')', '')
if len(linedata) > 9:
output_line['duration'] = linedata[9].replace('(', '').replace(')', '')
raw_output.append(output_line)
raw_output.append(output_line)
if raw:
return raw_output
else:
return _process(raw_output)
return _process(raw_output)
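The precompiled `DATE_RE` above matches "<Weekday> <Month>" abbreviations by character class instead of enumerating every name. A quick check against sample strings:

```python
import re

# The last parser's date heuristic: match weekday/month abbreviations
# by character class rather than listing all combinations.
DATE_RE = re.compile(r'[MTWFS][ouerha][nedritnu] [JFMASOND][aepuco][nbrynlgptvc]')

print(bool(DATE_RE.match('Mon Jan')))   # True
print(bool(DATE_RE.match('Fri Dec')))   # True
print(bool(DATE_RE.match('10.0.2.2')))  # False
```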

jc/parsers/lsattr.py Normal file

@@ -0,0 +1,162 @@
"""jc - JSON Convert `lsattr` command output parser
Usage (cli):
$ lsattr | jc --lsattr
or
$ jc lsattr
Usage (module):
import jc
result = jc.parse('lsattr', lsattr_command_output)
Schema:
Information from https://github.com/mirror/busybox/blob/2d4a3d9e6c1493a9520b907e07a41aca90cdfd94/e2fsprogs/e2fs_lib.c#L40
used to define field names
[
{
"file": string,
"compressed_file": Optional[boolean],
"compressed_dirty_file": Optional[boolean],
"compression_raw_access": Optional[boolean],
"secure_deletion": Optional[boolean],
"undelete": Optional[boolean],
"synchronous_updates": Optional[boolean],
"synchronous_directory_updates": Optional[boolean],
"immutable": Optional[boolean],
"append_only": Optional[boolean],
"no_dump": Optional[boolean],
"no_atime": Optional[boolean],
"compression_requested": Optional[boolean],
"encrypted": Optional[boolean],
"journaled_data": Optional[boolean],
"indexed_directory": Optional[boolean],
"no_tailmerging": Optional[boolean],
"top_of_directory_hierarchies": Optional[boolean],
"extents": Optional[boolean],
"no_cow": Optional[boolean],
"casefold": Optional[boolean],
"inline_data": Optional[boolean],
"project_hierarchy": Optional[boolean],
"verity": Optional[boolean],
}
]
Examples:
$ sudo lsattr /etc/passwd | jc --lsattr
[
{
"file": "/etc/passwd",
"extents": true
}
]
"""
from typing import List, Dict
from jc.jc_types import JSONDictType
import jc.utils
class info():
"""Provides parser metadata (version, author, etc.)"""
version = '1.0'
description = '`lsattr` command parser'
author = 'Mark Rotner'
author_email = 'rotner.mr@gmail.com'
compatible = ['linux']
magic_commands = ['lsattr']
tags = ['command']
__version__ = info.version
ERROR_PREFIX = "lsattr:"
# https://github.com/mirror/busybox/blob/2d4a3d9e6c1493a9520b907e07a41aca90cdfd94/e2fsprogs/e2fs_lib.c#L40
# https://github.com/landley/toybox/blob/f1682dc79fd75f64042b5438918fe5a507977e1c/toys/other/lsattr.c#L97
ATTRIBUTES = {
"B": "compressed_file",
"Z": "compressed_dirty_file",
"X": "compression_raw_access",
"s": "secure_deletion",
"u": "undelete",
"S": "synchronous_updates",
"D": "synchronous_directory_updates",
"i": "immutable",
"a": "append_only",
"d": "no_dump",
"A": "no_atime",
"c": "compression_requested",
"E": "encrypted",
"j": "journaled_data",
"I": "indexed_directory",
"t": "no_tailmerging",
"T": "top_of_directory_hierarchies",
"e": "extents",
"C": "no_cow",
"F": "casefold",
"N": "inline_data",
"P": "project_hierarchy",
"V": "verity",
}
def parse(
data: str,
raw: bool = False,
quiet: bool = False
) -> List[JSONDictType]:
"""
Main text parsing function
Parameters:
data: (string) text data to parse
raw: (boolean) unprocessed output if True
quiet: (boolean) suppress warning messages if True
Returns:
List of Dictionaries. Raw or processed structured data.
"""
jc.utils.compatibility(__name__, info.compatible, quiet)
jc.utils.input_type_check(data)
output: List = []
cleandata = list(filter(None, data.splitlines()))
if not jc.utils.has_data(data):
return output
for line in cleandata:
# -R flag returns the output in the format:
# Folder:
# attributes file_in_folder
if line.endswith(':'):
continue
# lsattr: Operation not supported ....
if line.startswith(ERROR_PREFIX):
continue
line_output: Dict = {}
# attributes file
# --------------e----- /etc/passwd
attributes, file = line.split()
line_output['file'] = file
for attribute in list(attributes):
attribute_key = ATTRIBUTES.get(attribute)
if attribute_key:
line_output[attribute_key] = True
if line_output:
output.append(line_output)
return output

jc/parsers/lsb_release.py Normal file

@@ -0,0 +1,89 @@
"""jc - JSON Convert `lsb_release` command parser
This parser is an alias to the Key/Value parser (`--kv`).
Usage (cli):
$ lsb_release -a | jc --lsb-release
or
$ jc lsb_release -a
Usage (module):
import jc
result = jc.parse('lsb_release', lsb_release_command_output)
Schema:
{
"<key>": string
}
Examples:
$ lsb_release -a | jc --lsb-release -p
{
"Distributor ID": "Ubuntu",
"Description": "Ubuntu 16.04.6 LTS",
"Release": "16.04",
"Codename": "xenial"
}
"""
from jc.jc_types import JSONDictType
import jc.parsers.kv
import jc.utils
class info():
"""Provides parser metadata (version, author, etc.)"""
version = '1.0'
description = '`lsb_release` command parser'
author = 'Kelly Brazil'
author_email = 'kellyjonbrazil@gmail.com'
details = 'Using the Key/Value parser'
compatible = ['linux', 'darwin', 'cygwin', 'win32', 'aix', 'freebsd']
magic_commands = ['lsb_release']
tags = ['command']
__version__ = info.version
def _process(proc_data: JSONDictType) -> JSONDictType:
"""
Final processing to conform to the schema.
Parameters:
proc_data: (Dictionary) raw structured data to process
Returns:
Dictionary. Structured to conform to the schema.
"""
return jc.parsers.kv._process(proc_data)
def parse(
data: str,
raw: bool = False,
quiet: bool = False
) -> JSONDictType:
"""
Main text parsing function
Parameters:
data: (string) text data to parse
raw: (boolean) unprocessed output if True
quiet: (boolean) suppress warning messages if True
Returns:
Dictionary. Raw or processed structured data.
"""
jc.utils.compatibility(__name__, info.compatible, quiet)
raw_output = jc.parsers.kv.parse(data, raw, quiet)
return raw_output if raw else _process(raw_output)
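Since this parser just delegates to the Key/Value parser, its core is line-by-line key/value splitting. A stand-in sketch (not the actual `jc.parsers.kv` implementation), splitting on the first colon as in `lsb_release -a` output:

```python
# Stand-in sketch of key/value parsing as used by the lsb_release alias
# (the real work is done by jc.parsers.kv).
def parse_kv(text):
    out = {}
    for line in filter(None, text.splitlines()):
        key, _, value = line.partition(':')
        out[key.strip()] = value.strip()
    return out

sample = "Distributor ID:\tUbuntu\nRelease:\t16.04"
print(parse_kv(sample))
```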


@@ -70,12 +70,14 @@ Example:
...
]
"""
import re
import jc.utils
class info():
"""Provides parser metadata (version, author, etc.)"""
version = '1.8'
version = '1.9'
description = '`mount` command parser'
author = 'Kelly Brazil'
author_email = 'kellyjonbrazil@gmail.com'
@@ -133,14 +135,26 @@ def _linux_parse(data):
for entry in data:
output_line = {}
parsed_line = entry.split()
output_line['filesystem'] = parsed_line[0]
output_line['mount_point'] = parsed_line[2]
output_line['type'] = parsed_line[4]
output_line['options'] = parsed_line[5].lstrip('(').rstrip(')').split(',')
pattern = re.compile(
r'''
(?P<filesystem>\S+)\s+
on\s+
(?P<mount_point>.*?)\s+
type\s+
(?P<type>\S+)\s+
\((?P<options>.*?)\)\s*''',
re.VERBOSE)
output.append(output_line)
match = pattern.match(entry)
groups = match.groupdict()
if groups:
output_line['filesystem'] = groups["filesystem"]
output_line['mount_point'] = groups["mount_point"]
output_line['type'] = groups["type"]
output_line['options'] = groups["options"].split(',')
output.append(output_line)
return output
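The verbose regex replaces the old positional `split()`, so mount points containing spaces now parse correctly. A quick demonstration with a hypothetical mount line:

```python
import re

# The mount parser's verbose pattern: the lazy mount_point group
# tolerates spaces in the path, which positional split() could not.
pattern = re.compile(
    r'''
    (?P<filesystem>\S+)\s+
    on\s+
    (?P<mount_point>.*?)\s+
    type\s+
    (?P<type>\S+)\s+
    \((?P<options>.*?)\)\s*''',
    re.VERBOSE)

line = '/dev/disk1s1 on /Volumes/My Disk type apfs (rw,nosuid)'
m = pattern.match(line)
print(m.group('mount_point'), m.group('options').split(','))
```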
@@ -160,7 +174,7 @@ def _aix_parse(data):
# AIX mount entries have the remote node as the zeroth element. If the
# mount is local, the zeroth element is the filesystem instead. We can
# detect this by the lenth of the list. For local mounts, length is 7,
# detect this by the length of the list. For local mounts, length is 7,
# and for remote mounts, the length is 8. In the remote case, pop off
# the zeroth element. Then parsed_line has a consistent format.
if len(parsed_line) == 8:


@@ -355,17 +355,18 @@ import jc.utils
class info():
"""Provides parser metadata (version, author, etc.)"""
version = '1.13'
version = '1.15'
description = '`netstat` command parser'
author = 'Kelly Brazil'
author_email = 'kellyjonbrazil@gmail.com'
compatible = ['linux', 'darwin', 'freebsd']
compatible = ['linux', 'darwin', 'freebsd', 'win32']
magic_commands = ['netstat']
tags = ['command']
__version__ = info.version
WINDOWS_NETSTAT_HEADER = "Active Connections"
def _process(proc_data):
"""
@@ -450,9 +451,10 @@ def parse(data, raw=False, quiet=False):
import jc.parsers.netstat_freebsd_osx
raw_output = jc.parsers.netstat_freebsd_osx.parse(cleandata)
# use linux parser
else:
elif cleandata[0] == WINDOWS_NETSTAT_HEADER: # use windows parser.
import jc.parsers.netstat_windows
raw_output = jc.parsers.netstat_windows.parse(cleandata)
else: # use linux parser.
import jc.parsers.netstat_linux
raw_output = jc.parsers.netstat_linux.parse(cleandata)


@@ -31,10 +31,19 @@ def normalize_interface_headers(header):
def parse_network(headers, entry):
LIST_OF_STATES = [
"ESTABLISHED", "SYN_SENT", "SYN_RECV", "FIN_WAIT1", "FIN_WAIT2",
"TIME_WAIT", "CLOSED", "CLOSE_WAIT", "LAST_ACK", "LISTEN", "CLOSING",
"UNKNOWN", "7"
]
# split entry based on presence of value in "State" column
contains_state = any(state in entry for state in LIST_OF_STATES)
split_modifier = 1 if contains_state else 2
entry = entry.split(maxsplit=len(headers) - split_modifier)
# Count words in header
# if len of line is one less than len of header, then insert None in field 5
entry = entry.split(maxsplit=len(headers) - 1)
if len(entry) == len(headers) - 1:
entry.insert(5, None)


@@ -0,0 +1,75 @@
"""
jc - JSON Convert Windows `netstat` command output parser
"""
from typing import Dict, List
POSSIBLE_PROTOCOLS = ("TCP", "UDP", "TCPv6", "UDPv6")
def normalize_headers(headers: str):
"""
Normalizes the headers to match the jc netstat parser style
(local_address -> local_address, local_port...).
"""
headers = headers.lower().strip()
headers = headers.replace("local address", "local_address")
headers = headers.replace("foreign address", "foreign_address")
return headers.split()
def parse(cleandata: List[str]):
"""
Main text parsing function for Windows netstat
Parameters:
cleandata: (List of strings) text data to parse
Returns:
List of Dictionaries. Raw structured data.
"""
raw_output = []
cleandata.pop(0) # Removing the "Active Connections" header.
headers = normalize_headers(cleandata.pop(0))
for line in cleandata:
line = line.strip()
if not line.startswith(POSSIBLE_PROTOCOLS): # -b.
line_data = raw_output.pop(len(raw_output) - 1)
line_data['program_name'] = line
raw_output.append(line_data)
continue
line_data = line.split()
line_data: Dict[str, str] = dict(zip(headers, line_data))
for key in list(line_data.keys()):
if key == "local_address":
local_address, local_port = line_data[key].rsplit(
":", maxsplit=1)
line_data["local_address"] = local_address
line_data["local_port"] = local_port
continue
if key == "foreign_address":
foreign_address, foreign_port = line_data[key].rsplit(
":", maxsplit=1)
line_data["foreign_address"] = foreign_address
line_data["foreign_port"] = foreign_port
continue
# UDP has no state, so the column after the "state" header shifts into the "state" field.
if key == "proto" and "state" in headers and line_data["proto"] == "UDP":
next_header = headers.index("state") + 1
if len(headers) > next_header:
next_header = headers[next_header]
line_data[next_header] = line_data["state"]
line_data["state"] = ''
raw_output.append(line_data)
return raw_output
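The `rsplit(":", maxsplit=1)` calls above are what keep IPv6 addresses intact: only the final colon separates the address from the port. Sketch:

```python
# Why the Windows netstat parser uses rsplit: only the last colon
# separates address from port, so IPv6 addresses stay intact.
for addr in ("127.0.0.1:135", "[::1]:445"):
    host, port = addr.rsplit(":", maxsplit=1)
    print(host, port)
```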

jc/parsers/nsd_control.py Normal file

@@ -0,0 +1,236 @@
"""jc - JSON Convert `nsd-control` command output parser
Usage (cli):
$ nsd-control | jc --nsd-control
or
$ jc nsd-control
Usage (module):
import jc
result = jc.parse('nsd_control', nsd_control_command_output)
Schema:
[
{
"version": string,
"verbosity": integer,
"ratelimit": integer
}
]
[
{
"zone": string
"status": {
"state": string,
"served-serial": string,
"commit-serial": string,
"wait": string
}
}
]
Examples:
$ nsd-control | jc --nsd-control status
[
{
"version": "4.6.2",
"verbosity": "2",
"ratelimit": "0"
}
]
$ nsd-control | jc --nsd-control zonestatus sunet.se
[
{
"zone": "sunet.se",
"status": {
"state": "ok",
"served-serial": "2023090704 since 2023-09-07T16:34:27",
"commit-serial": "2023090704 since 2023-09-07T16:34:27",
"wait": "28684 sec between attempts"
}
}
]
"""
from typing import List, Dict
import jc.utils
class info():
"""Provides parser metadata (version, author, etc.)"""
version = '1.0'
description = '`nsd-control` command parser'
author = 'Pettai'
author_email = 'pettai@sunet.se'
compatible = ['linux', 'darwin', 'cygwin', 'win32', 'aix', 'freebsd']
tags = ['command']
magic_commands = ['nsd-control']
__version__ = info.version
def _process(proc_data):
"""
Final processing to conform to the schema.
Parameters:
proc_data: (List of Dictionaries) raw structured data to process
Returns:
List of Dictionaries. Structured to conform to the schema.
"""
int_list = {'verbosity', 'ratelimit', 'wait'}
for entry in proc_data:
for key in entry:
if key in int_list:
entry[key] = jc.utils.convert_to_int(entry[key])
return proc_data
def parse(data: str, raw: bool = False, quiet: bool = False):
"""
Main text parsing function
Parameters:
data: (string) text data to parse
raw: (boolean) unprocessed output if True
quiet: (boolean) suppress warning messages if True
Returns:
List of Dictionaries. Raw or processed structured data.
"""
jc.utils.compatibility(__name__, info.compatible, quiet)
jc.utils.input_type_check(data)
raw_output: List[Dict] = []
if jc.utils.has_data(data):
itrparse = False
itr: Dict = {}
for line in filter(None, data.splitlines()):
line = line.strip()
# default 'ok'
if line.startswith('ok'):
raw_output.append({'command': 'ok'})
continue
# status
if line.startswith('version:'):
status = {}
linedata = line.split(':', maxsplit=1)
version = linedata[1].strip()
status.update({'version': version})
continue
if line.startswith('verbosity:'):
linedata = line.split(':', maxsplit=1)
verbosity = linedata[1]
status.update({'verbosity': verbosity})
continue
if line.startswith('ratelimit:'):
linedata = line.split(':', maxsplit=1)
ratelimit = linedata[1]
status.update({'ratelimit': ratelimit})
raw_output.append(status)
continue
# print_cookie_secrets
if line.startswith('active'):
itrparse = True
itr = {}
linedata = line.split(':', maxsplit=1)
active = linedata[1].strip()
itr.update({'active': active})
continue
if line.startswith('staging'):
linedata = line.split(':', maxsplit=1)
staging = linedata[1].strip()
itr.update({'staging': staging})
continue
# print_tsig
if line.startswith('key:'):
tsigs = {}
tsigdata = dict()
linedata = line.split(' ', maxsplit=6)
name = linedata[2].strip('"').rstrip('"')
tsigdata.update({'name': name})
secret = linedata[4].strip('"').rstrip('"')
tsigdata.update({'secret': secret})
algorithm = linedata[6].strip('"').rstrip('"')
tsigdata.update({'algorithm': algorithm})
tsigs.update({'key': tsigdata})
raw_output.append(tsigs)
continue
# zonestatus
if line.startswith('zone:'):
zonename: Dict = dict()
zstatus: Dict = dict()
linedata = line.split(':\t', maxsplit=1)
zone = linedata[1]
zonename.update({'zone': zone})
continue
if line.startswith('state:'):
linedata = line.split(': ', maxsplit=1)
state = linedata[1]
zstatus.update({'state': state})
continue
if line.startswith('served-serial:'):
linedata = line.split(': ', maxsplit=1)
served = linedata[1].strip('"').rstrip('"')
zstatus.update({'served-serial': served})
continue
if line.startswith('commit-serial:'):
linedata = line.split(': ', maxsplit=1)
commit = linedata[1].strip('"').rstrip('"')
zstatus.update({'commit-serial': commit})
continue
if line.startswith('wait:'):
linedata = line.split(': ', maxsplit=1)
wait = linedata[1].strip('"').rstrip('"')
zstatus.update({'wait': wait})
zonename.update({'status': zstatus})
raw_output.append(zonename)
continue
# stats
if line.startswith('server') or line.startswith('num.') or line.startswith('size.') or line.startswith('time.') or line.startswith('zone.'):
itrparse = True
linedata = line.split('=', maxsplit=1)
key = linedata[0]
if key.startswith('time.'):
value = float(linedata[1])
else:
value = int(linedata[1])
itr.update({key: value})
continue
if itrparse:
raw_output.append(itr)
return raw_output if raw else _process(raw_output)
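The stats branch above types values by key prefix: `time.*` counters become floats, everything else integers. A minimal sketch of that rule with hypothetical stat lines:

```python
# Sketch of the nsd-control stats typing rule: keys under "time." are
# floats, all other counters are integers.
def parse_stat(line):
    key, value = line.split('=', maxsplit=1)
    return key, float(value) if key.startswith('time.') else int(value)

print(parse_stat('num.queries=42'))
print(parse_stat('time.elapsed=120.5'))
```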

jc/parsers/os_release.py Normal file

@@ -0,0 +1,113 @@
"""jc - JSON Convert `/etc/os-release` file parser
This parser is an alias to the Key/Value parser (`--kv`).
Usage (cli):
$ cat /etc/os-release | jc --os-release
Usage (module):
import jc
result = jc.parse('os_release', os_release_output)
Schema:
{
"<key>": string
}
Examples:
$ cat /etc/os-release | jc --os-release -p
{
"NAME": "CentOS Linux",
"VERSION": "7 (Core)",
"ID": "centos",
"ID_LIKE": "rhel fedora",
"VERSION_ID": "7",
"PRETTY_NAME": "CentOS Linux 7 (Core)",
"ANSI_COLOR": "0;31",
"CPE_NAME": "cpe:/o:centos:centos:7",
"HOME_URL": "https://www.centos.org/",
"BUG_REPORT_URL": "https://bugs.centos.org/",
"CENTOS_MANTISBT_PROJECT": "CentOS-7",
"CENTOS_MANTISBT_PROJECT_VERSION": "7",
"REDHAT_SUPPORT_PRODUCT": "centos",
"REDHAT_SUPPORT_PRODUCT_VERSION": "7"
}
$ cat /etc/os-release | jc --os-release -p -r
{
"NAME": "\\"CentOS Linux\\"",
"VERSION": "\\"7 (Core)\\"",
"ID": "\\"centos\\"",
"ID_LIKE": "\\"rhel fedora\\"",
"VERSION_ID": "\\"7\\"",
"PRETTY_NAME": "\\"CentOS Linux 7 (Core)\\"",
"ANSI_COLOR": "\\"0;31\\"",
"CPE_NAME": "\\"cpe:/o:centos:centos:7\\"",
"HOME_URL": "\\"https://www.centos.org/\\"",
"BUG_REPORT_URL": "\\"https://bugs.centos.org/\\"",
"CENTOS_MANTISBT_PROJECT": "\\"CentOS-7\\"",
"CENTOS_MANTISBT_PROJECT_VERSION": "\\"7\\"",
"REDHAT_SUPPORT_PRODUCT": "\\"centos\\"",
"REDHAT_SUPPORT_PRODUCT_VERSION": "\\"7\\""
}
"""
from jc.jc_types import JSONDictType
import jc.parsers.kv
import jc.utils
class info():
"""Provides parser metadata (version, author, etc.)"""
version = '1.0'
description = '`/etc/os-release` file parser'
author = 'Kelly Brazil'
author_email = 'kellyjonbrazil@gmail.com'
details = 'Using the Key/Value parser'
compatible = ['linux', 'darwin', 'cygwin', 'win32', 'aix', 'freebsd']
tags = ['file', 'standard', 'string']
__version__ = info.version
def _process(proc_data: JSONDictType) -> JSONDictType:
"""
Final processing to conform to the schema.
Parameters:
proc_data: (Dictionary) raw structured data to process
Returns:
Dictionary. Structured to conform to the schema.
"""
return jc.parsers.kv._process(proc_data)
def parse(
data: str,
raw: bool = False,
quiet: bool = False
) -> JSONDictType:
"""
Main text parsing function
Parameters:
data: (string) text data to parse
raw: (boolean) unprocessed output if True
quiet: (boolean) suppress warning messages if True
Returns:
Dictionary. Raw or processed structured data.
"""
jc.utils.compatibility(__name__, info.compatible, quiet)
raw_output = jc.parsers.kv.parse(data, raw, quiet)
return raw_output if raw else _process(raw_output)


@@ -28,12 +28,12 @@
# OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
# OF THE POSSIBILITY OF SUCH DAMAGE.
import sys
import string
if sys.version_info >= (3, 0):
def unichr(character): # pylint: disable=redefined-builtin
return chr(character)
def unichr(character): # pylint: disable=redefined-builtin
return chr(character)
def ConvertNEXTSTEPToUnicode(hex_digits):
# taken from http://ftp.unicode.org/Public/MAPPINGS/VENDORS/NEXT/NEXTSTEP.TXT


@@ -64,12 +64,10 @@ def GetFileEncoding(path):
def OpenFileWithEncoding(file_path, encoding):
return codecs.open(file_path, 'r', encoding=encoding, errors='ignore')
if sys.version_info < (3, 0):
def OpenFile(file_path):
return open(file_path, 'rb')
else:
def OpenFile(file_path):
return open(file_path, 'br')
def OpenFile(file_path):
return open(file_path, 'rb')
class PBParser(object):


@@ -32,7 +32,7 @@ import sys
from functools import cmp_to_key
# for python 3.10+ compatibility
if sys.version_info.major == 3 and sys.version_info.minor >= 10:
if sys.version_info >= (3, 10):
import collections
setattr(collections, "MutableMapping", collections.abc.MutableMapping)


@@ -40,6 +40,9 @@ Schema:
"kb_ccwr_s": float,
"cswch_s": float,
"nvcswch_s": float,
"usr_ms": integer,
"system_ms": integer,
"guest_ms": integer,
"command": string
}
]
@@ -128,7 +131,7 @@ from jc.exceptions import ParseError
class info():
"""Provides parser metadata (version, author, etc.)"""
version = '1.1'
version = '1.3'
description = '`pidstat -H` command parser'
author = 'Kelly Brazil'
author_email = 'kellyjonbrazil@gmail.com'
@@ -152,11 +155,16 @@ def _process(proc_data: List[Dict]) -> List[Dict]:
List of Dictionaries. Structured to conform to the schema.
"""
int_list = {'time', 'uid', 'pid', 'cpu', 'vsz', 'rss', 'stksize', 'stkref'}
int_list = {
'time', 'uid', 'pid', 'cpu', 'vsz', 'rss', 'stksize', 'stkref',
'usr_ms', 'system_ms', 'guest_ms'
}
float_list = {'percent_usr', 'percent_system', 'percent_guest', 'percent_cpu',
'minflt_s', 'majflt_s', 'percent_mem', 'kb_rd_s', 'kb_wr_s',
'kb_ccwr_s', 'cswch_s', 'nvcswch_s'}
float_list = {
'percent_usr', 'percent_system', 'percent_guest', 'percent_cpu',
'minflt_s', 'majflt_s', 'percent_mem', 'kb_rd_s', 'kb_wr_s',
'kb_ccwr_s', 'cswch_s', 'nvcswch_s', 'percent_wait'
}
for entry in proc_data:
for key in entry:
@@ -169,6 +177,14 @@ def _process(proc_data: List[Dict]) -> List[Dict]:
return proc_data
def normalize_header(header: str) -> str:
return header.replace('#', ' ')\
.replace('-', '_')\
.replace('/', '_')\
.replace('%', 'percent_')\
.lower()
def parse(
data: str,
raw: bool = False,
@@ -191,29 +207,28 @@ def parse(
jc.utils.input_type_check(data)
raw_output: List = []
table_list: List = []
header_found = False
if jc.utils.has_data(data):
# check for line starting with # as the start of the table
data_list = list(filter(None, data.splitlines()))
for line in data_list.copy():
if line.startswith('#'):
break
else:
data_list.pop(0)
if not data_list:
for line in data_list:
if line.startswith('#'):
header_found = True
if len(table_list) > 1:
raw_output.extend(simple_table_parse(table_list))
table_list = [normalize_header(line)]
continue
if header_found:
table_list.append(line)
if len(table_list) > 1:
raw_output.extend(simple_table_parse(table_list))
if not header_found:
raise ParseError('Could not parse pidstat output. Make sure to use "pidstat -h".')
# normalize header
data_list[0] = data_list[0].replace('#', ' ')\
.replace('/', '_')\
.replace('%', 'percent_')\
.lower()
# remove remaining header lines (e.g. pidstat -H 2 5)
data_list = [i for i in data_list if not i.startswith('#')]
raw_output = simple_table_parse(data_list)
return raw_output if raw else _process(raw_output)
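The new normalize_header helper replaces the inline header munging that was previously duplicated in the parse function. A minimal standalone check of the transformation (the sample header line is illustrative, not taken from a real pidstat run):

```python
def normalize_header(header: str) -> str:
    # mirror of the helper added above: turn the leading '#' into a space,
    # map '-' and '/' to '_', expand '%' to 'percent_', then lowercase
    return header.replace('#', ' ')\
                 .replace('-', '_')\
                 .replace('/', '_')\
                 .replace('%', 'percent_')\
                 .lower()

print(normalize_header('# Time UID %usr %CPU'))
# prints "  time uid percent_usr percent_cpu"
```

The leading spaces are harmless because simple_table_parse splits field names on whitespace.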

View File

@@ -34,6 +34,7 @@ Schema:
"percent_usr": float,
"percent_system": float,
"percent_guest": float,
"percent_wait": float,
"percent_cpu": float,
"cpu": integer,
"minflt_s": float,
@@ -48,6 +49,9 @@ Schema:
"kb_ccwr_s": float,
"cswch_s": float,
"nvcswch_s": float,
"usr_ms": integer,
"system_ms": integer,
"guest_ms": integer,
"command": string,
# below object only exists if using -qq or ignore_exceptions=True
@@ -72,7 +76,7 @@ Examples:
{"time":"1646859134","uid":"0","pid":"9","percent_usr":"0.00","perc...}
...
"""
from typing import Dict, Iterable, Union
from typing import List, Dict, Iterable, Union
import jc.utils
from jc.streaming import (
add_jc_meta, streaming_input_type_check, streaming_line_input_type_check, raise_or_yield
@@ -83,7 +87,7 @@ from jc.exceptions import ParseError
class info():
"""Provides parser metadata (version, author, etc.)"""
version = '1.1'
version = '1.2'
description = '`pidstat -H` command streaming parser'
author = 'Kelly Brazil'
author_email = 'kellyjonbrazil@gmail.com'
@@ -107,11 +111,16 @@ def _process(proc_data: Dict) -> Dict:
Dictionary. Structured data to conform to the schema.
"""
int_list = {'time', 'uid', 'pid', 'cpu', 'vsz', 'rss', 'stksize', 'stkref'}
int_list = {
'time', 'uid', 'pid', 'cpu', 'vsz', 'rss', 'stksize', 'stkref',
'usr_ms', 'system_ms', 'guest_ms'
}
float_list = {'percent_usr', 'percent_system', 'percent_guest', 'percent_cpu',
'minflt_s', 'majflt_s', 'percent_mem', 'kb_rd_s', 'kb_wr_s',
'kb_ccwr_s', 'cswch_s', 'nvcswch_s'}
float_list = {
'percent_usr', 'percent_system', 'percent_guest', 'percent_wait',
'percent_cpu', 'minflt_s', 'majflt_s', 'percent_mem', 'kb_rd_s',
'kb_wr_s', 'kb_ccwr_s', 'cswch_s', 'nvcswch_s'
}
for key in proc_data:
if key in int_list:
@@ -123,6 +132,14 @@ def _process(proc_data: Dict) -> Dict:
return proc_data
def normalize_header(header: str) -> str:
return header.replace('#', ' ')\
.replace('-', '_')\
.replace('/', '_')\
.replace('%', 'percent_')\
.lower()
@add_jc_meta
def parse(
data: Iterable[str],
@@ -149,8 +166,8 @@ def parse(
jc.utils.compatibility(__name__, info.compatible, quiet)
streaming_input_type_check(data)
found_first_hash = False
header = ''
table_list: List = []
header: str = ''
for line in data:
try:
@@ -161,29 +178,30 @@ def parse(
# skip blank lines
continue
if not line.startswith('#') and not found_first_hash:
# skip preamble lines before header row
if line.startswith('#'):
if len(table_list) > 1:
output_line = simple_table_parse(table_list)[0]
yield output_line if raw else _process(output_line)
header = ''
header = normalize_header(line)
table_list = [header]
continue
if line.startswith('#') and not found_first_hash:
# normalize header
header = line.replace('#', ' ')\
.replace('/', '_')\
.replace('%', 'percent_')\
.lower()
found_first_hash = True
continue
if line.startswith('#') and found_first_hash:
# skip header lines after first one is found
continue
output_line = simple_table_parse([header, line])[0]
if output_line:
if header:
table_list.append(line)
output_line = simple_table_parse(table_list)[0]
yield output_line if raw else _process(output_line)
else:
raise ParseError('Not pidstat data')
table_list = [header]
continue
except Exception as e:
yield raise_or_yield(ignore_exceptions, e, line)
try:
if len(table_list) > 1:
output_line = simple_table_parse(table_list)[0]
yield output_line if raw else _process(output_line)
except Exception as e:
yield raise_or_yield(ignore_exceptions, e, str(table_list))
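The streaming rewrite buffers rows under the most recent header and flushes whenever a new '#' header line (or end of input) arrives, so repeated headers from e.g. `pidstat -H 2 5` each start a fresh table. A self-contained sketch of that buffering pattern, using a stand-in for jc.parsers.universal.simple_table_parse and illustrative sample lines:

```python
def simple_table_parse(table):
    # stand-in for jc.parsers.universal.simple_table_parse: the first entry
    # is the header row; each data row is whitespace-split into those fields
    headers = table[0].split()
    return [dict(zip(headers, row.split(None, len(headers) - 1)))
            for row in table[1:]]

stream = [
    '# time uid pid percent_cpu command',   # header row
    '1646859134 0 1 0.00 systemd',
    '# time uid pid percent_cpu command',   # repeated header: flush and reset
    '1646859136 0 9 0.25 kworker',
]

rows, table = [], []
for line in stream:
    if line.startswith('#'):
        if len(table) > 1:                  # flush buffered rows, if any
            rows.extend(simple_table_parse(table))
        table = [line.replace('#', ' ').lower()]
        continue
    if table:
        table.append(line)
if len(table) > 1:                          # flush the final table
    rows.extend(simple_table_parse(table))

print(rows[0]['command'], rows[1]['pid'])
# prints "systemd 9"
```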

View File

@@ -30,6 +30,8 @@ Schema:
"packets_received": integer,
"packet_loss_percent": float,
"duplicates": integer,
"errors": integer,
"corrupted": integer,
"round_trip_ms_min": float,
"round_trip_ms_avg": float,
"round_trip_ms_max": float,
@@ -157,6 +159,7 @@ Examples:
]
}
"""
import re
import string
import ipaddress
import jc.utils
@@ -164,7 +167,7 @@ import jc.utils
class info():
"""Provides parser metadata (version, author, etc.)"""
version = '1.8'
version = '1.10'
description = '`ping` and `ping6` command parser'
author = 'Kelly Brazil'
author_email = 'kellyjonbrazil@gmail.com'
@@ -190,7 +193,7 @@ def _process(proc_data):
"""
int_list = {
'data_bytes', 'packets_transmitted', 'packets_received', 'bytes', 'icmp_seq', 'ttl',
'duplicates', 'vr', 'hl', 'tos', 'len', 'id', 'flg', 'off', 'pro', 'cks'
'duplicates', 'corrupted', 'errors', 'vr', 'hl', 'tos', 'len', 'id', 'flg', 'off', 'pro', 'cks'
}
float_list = {
@@ -284,6 +287,8 @@ def _linux_parse(data):
if ipv4 and linedata[0][5] not in string.digits:
hostname = True
# fixup for missing hostname
linedata[0] = linedata[0][:5] + 'nohost' + linedata[0][5:]
elif ipv4 and linedata[0][5] in string.digits:
hostname = False
elif not ipv4 and ' (' in linedata[0]:
@@ -314,45 +319,52 @@ def _linux_parse(data):
if line.startswith('---'):
footer = True
raw_output['destination'] = line.split()[1]
if line[4] != ' ': # fixup for missing hostname
raw_output['destination'] = line.split()[1]
continue
if footer:
if 'packets transmitted' in line:
if ' duplicates,' in line:
raw_output.update(
{
'packets_transmitted': line.split()[0],
'packets_received': line.split()[3],
'packet_loss_percent': line.split()[7].rstrip('%'),
'duplicates': line.split()[5].lstrip('+'),
'time_ms': line.split()[11].replace('ms', '')
}
)
continue
else:
raw_output.update(
{
'packets_transmitted': line.split()[0],
'packets_received': line.split()[3],
'packet_loss_percent': line.split()[5].rstrip('%'),
'duplicates': '0',
'time_ms': line.split()[9].replace('ms', '')
}
)
continue
# Init in zero, to keep compatibility with previous behaviour
if 'duplicates' not in raw_output:
raw_output['duplicates'] = '0'
else:
split_line = line.split(' = ')[1]
split_line = split_line.split('/')
raw_output.update(
{
'round_trip_ms_min': split_line[0],
'round_trip_ms_avg': split_line[1],
'round_trip_ms_max': split_line[2],
'round_trip_ms_stddev': split_line[3].split()[0]
}
)
#
# See: https://github.com/dgibson/iputils/blob/master/ping_common.c#L995
#
m = re.search(r'(\d+) packets transmitted', line)
if m:
raw_output['packets_transmitted'] = m.group(1)
m = re.search(r'(\d+) received,', line)
if m:
raw_output['packets_received'] = m.group(1)
m = re.search(r'[+](\d+) duplicates', line)
if m:
raw_output['duplicates'] = m.group(1)
m = re.search(r'[+](\d+) corrupted', line)
if m:
raw_output['corrupted'] = m.group(1)
m = re.search(r'[+](\d+) errors', line)
if m:
raw_output['errors'] = m.group(1)
m = re.search(r'([\d\.]+)% packet loss', line)
if m:
raw_output['packet_loss_percent'] = m.group(1)
m = re.search(r'time (\d+)ms', line)
if m:
raw_output['time_ms'] = m.group(1)
m = re.search(r'rtt min\/avg\/max\/mdev += +([\d\.]+)\/([\d\.]+)\/([\d\.]+)\/([\d\.]+) ms', line)
if m:
raw_output['round_trip_ms_min'] = m.group(1)
raw_output['round_trip_ms_avg'] = m.group(2)
raw_output['round_trip_ms_max'] = m.group(3)
raw_output['round_trip_ms_stddev'] = m.group(4)
# ping response lines
else:

View File

@@ -31,7 +31,7 @@ Schema:
"source_ip": string,
"destination_ip": string,
"sent_bytes": integer,
"pattern": string, # (null if not set)
"pattern": string, # null if not set
"destination": string,
"timestamp": float,
"response_bytes": integer,
@@ -44,10 +44,12 @@ Schema:
"packets_received": integer,
"packet_loss_percent": float,
"duplicates": integer,
"round_trip_ms_min": float,
"round_trip_ms_avg": float,
"round_trip_ms_max": float,
"round_trip_ms_stddev": float,
"errors": integer, # null if not set
"corrupted": integer, # null if not set
"round_trip_ms_min": float, # null if not set
"round_trip_ms_avg": float, # null if not set
"round_trip_ms_max": float, # null if not set
"round_trip_ms_stddev": float, # null if not set
# below object only exists if using -qq or ignore_exceptions=True
"_jc_meta": {
@@ -74,6 +76,7 @@ Examples:
{"type":"reply","destination_ip":"1.1.1.1","sent_bytes":"56","patte...}
...
"""
import re
import string
import ipaddress
import jc.utils
@@ -85,7 +88,7 @@ from jc.exceptions import ParseError
class info():
"""Provides parser metadata (version, author, etc.)"""
version = '1.2'
version = '1.4'
description = '`ping` and `ping6` command streaming parser'
author = 'Kelly Brazil'
author_email = 'kellyjonbrazil@gmail.com'
@@ -110,13 +113,14 @@ def _process(proc_data):
Dictionary. Structured data to conform to the schema.
"""
int_list = {
'sent_bytes', 'packets_transmitted', 'packets_received', 'response_bytes', 'icmp_seq',
'ttl', 'duplicates', 'vr', 'hl', 'tos', 'len', 'id', 'flg', 'off', 'pro', 'cks'
'sent_bytes', 'packets_transmitted', 'packets_received',
'response_bytes', 'icmp_seq', 'ttl', 'duplicates', 'vr', 'hl', 'tos',
'len', 'id', 'flg', 'off', 'pro', 'cks', 'errors', 'corrupted'
}
float_list = {
'packet_loss_percent', 'round_trip_ms_min', 'round_trip_ms_avg', 'round_trip_ms_max',
'round_trip_ms_stddev', 'timestamp', 'time_ms'
'packet_loss_percent', 'round_trip_ms_min', 'round_trip_ms_avg',
'round_trip_ms_max', 'round_trip_ms_stddev', 'timestamp', 'time_ms'
}
for key in proc_data:
@@ -144,6 +148,12 @@ class _state:
packet_loss_percent = None
time_ms = None
duplicates = None
corrupted = None
errors = None
round_trip_ms_min = None
round_trip_ms_avg = None
round_trip_ms_max = None
round_trip_ms_stddev = None
def _ipv6_in(line):
@@ -340,6 +350,8 @@ def _linux_parse(line, s):
if s.ipv4 and line[5] not in string.digits:
s.hostname = True
# fixup for missing hostname
line = line[:5] + 'nohost' + line[5:]
elif s.ipv4 and line[5] in string.digits:
s.hostname = False
elif not s.ipv4 and ' (' in line:
@@ -367,24 +379,44 @@ def _linux_parse(line, s):
return None
if s.footer:
if 'packets transmitted' in line:
if ' duplicates,' in line:
s.packets_transmitted = line.split()[0]
s.packets_received = line.split()[3]
s.packet_loss_percent = line.split()[7].rstrip('%')
s.duplicates = line.split()[5].lstrip('+')
s.time_ms = line.split()[11].replace('ms', '')
return None
#
# See: https://github.com/dgibson/iputils/blob/master/ping_common.c#L995
#
m = re.search(r'(\d+) packets transmitted', line)
if m:
s.packets_transmitted = m.group(1)
s.packets_transmitted = line.split()[0]
s.packets_received = line.split()[3]
s.packet_loss_percent = line.split()[5].rstrip('%')
s.duplicates = '0'
s.time_ms = line.split()[9].replace('ms', '')
return None
m = re.search(r'(\d+) received,', line)
if m:
s.packets_received = m.group(1)
m = re.search(r'[+](\d+) duplicates', line)
if m:
s.duplicates = m.group(1)
m = re.search(r'[+](\d+) corrupted', line)
if m:
s.corrupted = m.group(1)
m = re.search(r'[+](\d+) errors', line)
if m:
s.errors = m.group(1)
m = re.search(r'([\d\.]+)% packet loss', line)
if m:
s.packet_loss_percent = m.group(1)
m = re.search(r'time (\d+)ms', line)
if m:
s.time_ms = m.group(1)
m = re.search(r'rtt min\/avg\/max\/mdev += +([\d\.]+)\/([\d\.]+)\/([\d\.]+)\/([\d\.]+) ms', line)
if m:
s.round_trip_ms_min = m.group(1)
s.round_trip_ms_avg = m.group(2)
s.round_trip_ms_max = m.group(3)
s.round_trip_ms_stddev = m.group(4)
split_line = line.split(' = ')[1]
split_line = split_line.split('/')
output_line = {
'type': 'summary',
'destination_ip': s.destination_ip or None,
@@ -392,15 +424,16 @@ def _linux_parse(line, s):
'pattern': s.pattern or None,
'packets_transmitted': s.packets_transmitted or None,
'packets_received': s.packets_received or None,
'packet_loss_percent': s.packet_loss_percent or None,
'duplicates': s.duplicates or None,
'time_ms': s.time_ms or None,
'round_trip_ms_min': split_line[0],
'round_trip_ms_avg': split_line[1],
'round_trip_ms_max': split_line[2],
'round_trip_ms_stddev': split_line[3].split()[0]
'packet_loss_percent': s.packet_loss_percent,
'duplicates': s.duplicates or '0',
'errors': s.errors,
'corrupted': s.corrupted,
'time_ms': s.time_ms,
'round_trip_ms_min': s.round_trip_ms_min,
'round_trip_ms_avg': s.round_trip_ms_avg,
'round_trip_ms_max': s.round_trip_ms_max,
'round_trip_ms_stddev': s.round_trip_ms_stddev
}
return output_line
# ping response lines
@@ -486,6 +519,7 @@ def parse(data, raw=False, quiet=False, ignore_exceptions=False):
streaming_input_type_check(data)
s = _state()
summary_obj = {}
for line in data:
try:
@@ -526,6 +560,12 @@ def parse(data, raw=False, quiet=False, ignore_exceptions=False):
if s.os_detected and s.linux:
output_line = _linux_parse(line, s)
# summary can be multiple lines so don't output until the end
if output_line:
if output_line.get('type', None) == 'summary':
summary_obj = output_line
continue
elif s.os_detected and s.bsd:
output_line = _bsd_parse(line, s)
@@ -540,3 +580,10 @@ def parse(data, raw=False, quiet=False, ignore_exceptions=False):
except Exception as e:
yield raise_or_yield(ignore_exceptions, e, line)
# yield summary, if it exists
try:
if summary_obj:
yield summary_obj if raw else _process(summary_obj)
except Exception as e:
yield raise_or_yield(ignore_exceptions, e, str(summary_obj))
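The rewritten footer handling matches each statistic independently instead of splitting on fixed word positions, so optional fields like `+N duplicates`, `+N corrupted`, and `+N errors` no longer shift the other values. A minimal sketch running the same regexes against sample iputils summary lines (the sample text is illustrative):

```python
import re

summary = '5 packets transmitted, 4 received, +1 duplicates, 20% packet loss, time 4005ms'
rtt = 'rtt min/avg/max/mdev = 23.1/24.2/25.3/0.5 ms'

stats = {}
for pattern, key in [
    (r'(\d+) packets transmitted', 'packets_transmitted'),
    (r'(\d+) received,', 'packets_received'),
    (r'[+](\d+) duplicates', 'duplicates'),
    (r'([\d\.]+)% packet loss', 'packet_loss_percent'),
    (r'time (\d+)ms', 'time_ms'),
]:
    m = re.search(pattern, summary)
    if m:                           # absent statistics are simply skipped
        stats[key] = m.group(1)

m = re.search(r'rtt min/avg/max/mdev += +([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+) ms', rtt)
if m:
    stats['round_trip_ms_min'], stats['round_trip_ms_avg'], \
        stats['round_trip_ms_max'], stats['round_trip_ms_stddev'] = m.groups()

print(stats['packets_received'], stats['duplicates'], stats['round_trip_ms_avg'])
# prints "4 1 24.2"
```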

jc/parsers/pkg_index_apk.py (220 lines) Normal file
View File

@@ -0,0 +1,220 @@
"""jc - JSON Convert Alpine Linux Package Index files
Usage (cli):
$ cat APKINDEX | jc --pkg-index-apk
Usage (module):
import jc
result = jc.parse('pkg_index_apk', pkg_index_apk_output)
Schema:
[
{
"checksum": string,
"package": string,
"version": string,
"architecture": string,
"package_size": integer,
"installed_size": integer,
"description": string,
"url": string,
"license": string,
"origin": string,
"maintainer": {
"name": string,
"email": string,
},
"build_time": integer,
"commit": string,
"provider_priority": string,
"dependencies": [
string
],
"provides": [
string
],
"install_if": [
string
],
}
]
Example:
$ cat APKINDEX | jc --pkg-index-apk
[
{
"checksum": "Q1znBl9k+RKgY6gl5Eg3iz73KZbLY=",
"package": "yasm",
"version": "1.3.0-r4",
"architecture": "x86_64",
"package_size": 772109,
"installed_size": 1753088,
"description": "A rewrite of NASM to allow for multiple synta...",
"url": "http://www.tortall.net/projects/yasm/",
"license": "BSD-2-Clause",
"origin": "yasm",
"maintainer": {
"name": "Natanael Copa",
"email": "ncopa@alpinelinux.org"
},
"build_time": 1681228881,
"commit": "84a227baf001b6e0208e3352b294e4d7a40e93de",
"dependencies": [
"so:libc.musl-x86_64.so.1"
],
"provides": [
"cmd:vsyasm=1.3.0-r4",
"cmd:yasm=1.3.0-r4",
"cmd:ytasm=1.3.0-r4"
]
}
]
$ cat APKINDEX | jc --pkg-index-apk --raw
[
{
"C": "Q1znBl9k+RKgY6gl5Eg3iz73KZbLY=",
"P": "yasm",
"V": "1.3.0-r4",
"A": "x86_64",
"S": "772109",
"I": "1753088",
"T": "A rewrite of NASM to allow for multiple syntax supported...",
"U": "http://www.tortall.net/projects/yasm/",
"L": "BSD-2-Clause",
"o": "yasm",
"m": "Natanael Copa <ncopa@alpinelinux.org>",
"t": "1681228881",
"c": "84a227baf001b6e0208e3352b294e4d7a40e93de",
"D": "so:libc.musl-x86_64.so.1",
"p": "cmd:vsyasm=1.3.0-r4 cmd:yasm=1.3.0-r4 cmd:ytasm=1.3.0-r4"
},
]
"""
import re
from typing import List, Dict, Union
import jc.utils
class info:
"""Provides parser metadata (version, author, etc.)"""
version = "1.0"
description = "Alpine Linux Package Index file parser"
author = "Roey Darwish Dror"
author_email = "roey.ghost@gmail.com"
compatible = ['linux', 'darwin', 'cygwin', 'win32', 'aix', 'freebsd']
tags = ['standard', 'file', 'string']
__version__ = info.version
_KEY = {
"C": "checksum",
"P": "package",
"V": "version",
"A": "architecture",
"S": "package_size",
"I": "installed_size",
"T": "description",
"U": "url",
"L": "license",
"o": "origin",
"m": "maintainer",
"t": "build_time",
"c": "commit",
"k": "provider_priority",
"D": "dependencies",
"p": "provides",
"i": "install_if"
}
def _value(key: str, value: str) -> Union[str, int, List[str], Dict[str, str]]:
"""
Convert value to the appropriate type
Parameters:
key: (string) key name
value: (string) value to convert
Returns:
Converted value
"""
if key in ['S', 'I', 't', 'k']:
return int(value)
if key in ['D', 'p', 'i']:
splitted = value.split(' ')
return splitted
if key == "m":
m = re.match(r'(.*) <(.*)>', value)
if m:
return {'name': m.group(1), 'email': m.group(2)}
else:
return {'name': value}
return value
def _process(proc_data: List[Dict]) -> List[Dict]:
"""
Final processing to conform to the schema.
Parameters:
proc_data: (List of Dictionaries) raw structured data to process
Returns:
List of Dictionaries. Structured to conform to the schema.
"""
return [{_KEY.get(k, k): _value(k, v) for k, v in d.items()} for d in proc_data]
def parse(data: str, raw: bool = False, quiet: bool = False) -> List[Dict]:
"""
Main text parsing function
Parameters:
data: (string) text data to parse
raw: (boolean) unprocessed output if True
quiet: (boolean) suppress warning messages if True
Returns:
List of Dictionaries. Raw or processed structured data.
"""
jc.utils.compatibility(__name__, info.compatible, quiet)
jc.utils.input_type_check(data)
raw_output: List[dict] = []
package: Dict = {}
if jc.utils.has_data(data):
lines = iter(data.splitlines())
for line in lines:
line = line.strip()
if not line:
if package:
raw_output.append(package)
package = {}
continue
key = line[0]
value = line[2:].strip()
assert key not in package
package[key] = value
if package:
raw_output.append(package)
return raw_output if raw else _process(raw_output)
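The parse loop above treats an APKINDEX as blank-line-separated records of single-letter `X:value` fields, with maintainer strings split into name and email by the same regex used in `_value()`. The same logic in a self-contained sketch (the sample index content is illustrative):

```python
import re

APKINDEX = (
    'P:yasm\n'
    'V:1.3.0-r4\n'
    'm:Natanael Copa <ncopa@alpinelinux.org>\n'
    '\n'
    'P:nasm\n'
    'V:2.15.05-r0\n'
)

packages, package = [], {}
for line in APKINDEX.splitlines():
    line = line.strip()
    if not line:                          # blank line ends the current record
        if package:
            packages.append(package)
            package = {}
        continue
    package[line[0]] = line[2:].strip()   # key is the single letter before ':'
if package:                               # flush the final record
    packages.append(package)

# maintainer fields split into name/email as in _value()
m = re.match(r'(.*) <(.*)>', packages[0]['m'])
print(packages[1]['P'], m.group(2))
# prints "nasm ncopa@alpinelinux.org"
```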

jc/parsers/pkg_index_deb.py (148 lines) Normal file
View File

@@ -0,0 +1,148 @@
"""jc - JSON Convert Debian Package Index file parser
Usage (cli):
$ cat Packages | jc --pkg-index-deb
Usage (module):
import jc
result = jc.parse('pkg_index_deb', pkg_index_deb_output)
Schema:
[
{
"package": string,
"version": string,
"architecture": string,
"section": string,
"priority": string,
"installed_size": integer,
"maintainer": string,
"description": string,
"homepage": string,
"depends": string,
"conflicts": string,
"replaces": string,
"vcs_git": string,
"sha256": string,
"size": integer,
"vcs_git": string,
"filename": string
}
]
Examples:
$ cat Packages | jc --pkg-index-deb
[
{
"package": "aspnetcore-runtime-2.1",
"version": "2.1.22-1",
"architecture": "amd64",
"section": "devel",
"priority": "standard",
"installed_size": 71081,
"maintainer": "Microsoft <nugetaspnet@microsoft.com>",
"description": "Microsoft ASP.NET Core 2.1.22 Shared Framework",
"homepage": "https://www.asp.net/",
"depends": "libc6 (>= 2.14), dotnet-runtime-2.1 (>= 2.1.22)",
"sha256": "48d4e78a7ceff34105411172f4c3e91a0359b3929d84d26a493...",
"size": 21937036,
"filename": "pool/main/a/aspnetcore-runtime-2.1/aspnetcore-run..."
},
{
"package": "azure-functions-core-tools-4",
"version": "4.0.4590-1",
"architecture": "amd64",
"section": "devel",
"priority": "optional",
"maintainer": "Ahmed ElSayed <ahmels@microsoft.com>",
"description": "Azure Function Core Tools v4",
"homepage": "https://docs.microsoft.com/en-us/azure/azure-func...",
"conflicts": "azure-functions-core-tools-2, azure-functions-co...",
"replaces": "azure-functions-core-tools-2, azure-functions-cor...",
"vcs_git": "https://github.com/Azure/azure-functions-core-tool...",
"sha256": "a2a4f99d6d98ba0a46832570285552f2a93bab06cebbda2afc7...",
"size": 124417844,
"filename": "pool/main/a/azure-functions-core-tools-4/azure-fu..."
}
]
$ cat Packages | jc --pkg-index-deb -r
[
{
"package": "aspnetcore-runtime-2.1",
"version": "2.1.22-1",
"architecture": "amd64",
"section": "devel",
"priority": "standard",
"installed_size": "71081",
"maintainer": "Microsoft <nugetaspnet@microsoft.com>",
"description": "Microsoft ASP.NET Core 2.1.22 Shared Framework",
"homepage": "https://www.asp.net/",
"depends": "libc6 (>= 2.14), dotnet-runtime-2.1 (>= 2.1.22)",
"sha256": "48d4e78a7ceff34105411172f4c3e91a0359b3929d84d26a493...",
"size": "21937036",
"filename": "pool/main/a/aspnetcore-runtime-2.1/aspnetcore-run..."
},
{
"package": "azure-functions-core-tools-4",
"version": "4.0.4590-1",
"architecture": "amd64",
"section": "devel",
"priority": "optional",
"maintainer": "Ahmed ElSayed <ahmels@microsoft.com>",
"description": "Azure Function Core Tools v4",
"homepage": "https://docs.microsoft.com/en-us/azure/azure-func...",
"conflicts": "azure-functions-core-tools-2, azure-functions-co...",
"replaces": "azure-functions-core-tools-2, azure-functions-cor...",
"vcs_git": "https://github.com/Azure/azure-functions-core-tool...",
"sha256": "a2a4f99d6d98ba0a46832570285552f2a93bab06cebbda2afc7...",
"size": "124417844",
"filename": "pool/main/a/azure-functions-core-tools-4/azure-fu..."
}
]
"""
from typing import List
from jc.jc_types import JSONDictType
import jc.parsers.rpm_qi as rpm_qi
class info():
"""Provides parser metadata (version, author, etc.)"""
version = '1.0'
description = 'Debian Package Index file parser'
author = 'Kelly Brazil'
author_email = 'kellyjonbrazil@gmail.com'
details = 'Using the rpm-qi parser'
compatible = ['linux', 'darwin', 'cygwin', 'win32', 'aix', 'freebsd']
tags = ['file']
__version__ = info.version
def parse(
data: str,
raw: bool = False,
quiet: bool = False
) -> List[JSONDictType]:
"""
Main text parsing function
Parameters:
data: (string) text data to parse
raw: (boolean) unprocessed output if True
quiet: (boolean) suppress warning messages if True
Returns:
List of Dictionaries. Raw or processed structured data.
"""
# This parser is an alias of rpm_qi.py
rpm_qi.info.compatible = ['linux', 'darwin', 'cygwin', 'win32', 'aix', 'freebsd']
rpm_qi.info.tags = ['file']
return rpm_qi.parse(data, raw, quiet)
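The alias works because a Debian Packages index, like `rpm -qi` output, is a series of blank-line-separated `Key: value` stanzas. A minimal sketch of that shared shape, not the rpm_qi implementation itself (the sample stanzas are illustrative):

```python
PACKAGES = (
    'Package: aspnetcore-runtime-2.1\n'
    'Version: 2.1.22-1\n'
    'Installed-Size: 71081\n'
    '\n'
    'Package: azure-functions-core-tools-4\n'
    'Version: 4.0.4590-1\n'
)

stanzas, current = [], {}
for line in PACKAGES.splitlines():
    if not line.strip():                  # blank line separates package stanzas
        if current:
            stanzas.append(current)
            current = {}
        continue
    key, _, value = line.partition(': ')
    # normalize keys the way the schema above does: lowercase, '-' -> '_'
    current[key.lower().replace('-', '_')] = value
if current:                               # flush the final stanza
    stanzas.append(current)

print([s['package'] for s in stanzas])
# prints "['aspnetcore-runtime-2.1', 'azure-functions-core-tools-4']"
```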

Some files were not shown because too many files have changed in this diff.