Mirror of https://github.com/kellyjonbrazil/jc.git (synced 2026-04-03 17:44:07 +02:00)

Compare commits (182 commits)
CHANGELOG

@@ -1,5 +1,60 @@
 jc changelog

+20200727 v1.13.0
+- Add ping and ping6 command parser tested on linux, macos, and freebsd
+- Add traceroute and traceroute6 command parser tested on linux, macos, and freebsd
+- Add tracepath command parser tested on linux
+- Update ini parser to support files only containing key/value pairs
+- Update uname parser exception with a hint to use "uname -a"
+- Update route parser to support IPv6 tables
+
+20200711 v1.12.1
+- Fix tests when using older version of pygments library
+
+20200710 v1.12.0
+- Add sysctl command parser tested on linux, macOS, and freebsd
+- Update the cli code to allow older versions of the pygments library (2.3.0) for debian packaging
+- Code cleanup on the cli
+- Add tests for the cli
+- Vendorize cgitb as tracebackplus for verbose debug messages
+
+20200625 v1.11.8
+- Add verbose debug option using -dd argument
+
+20200622 v1.11.7
+- Fix iptables parser issue which would not output the last chain
+
+20200614 v1.11.6
+- Improve and standardize empty data check for all parsers
+
+20200612 v1.11.5
+- Update airport_s parser to fix error on parsing empty data
+- Update arp parser to fix error on parsing empty data
+- Update blkid parser to fix error on parsing empty data
+- Update crontab parser to fix error on parsing empty data
+- Update crontab_u parser to fix error on parsing empty data
+- Update df parser to fix error on parsing empty data
+- Update free parser to fix error on parsing empty data
+- Update lsblk parser to fix error on parsing empty data
+- Update lsmod parser to fix error on parsing empty data
+- Update mount parser to fix error on parsing empty data
+- Update netstat parser to fix error on parsing empty data
+- Update ntpq parser to fix error on parsing empty data
+- Update ps parser to fix error on parsing empty data
+- Update route parser to fix error on parsing empty data
+- Update systemctl parser to fix error on parsing empty data
+- Update systemctl_lj parser to fix error on parsing empty data
+- Update systemctl_ls parser to fix error on parsing empty data
+- Update systemctl_luf parser to fix error on parsing empty data
+- Update uptime parser to fix error on parsing empty data
+- Update w parser to fix error on parsing empty data
+- Update xml parser to fix error on parsing empty data
+- Add tests to all parsers for no data condition
+- Update ss parser to fix integer fields
+
+20200610 v1.11.4
+- Update ls parser to fix error on parsing an empty directory
+
 20200609 v1.11.3
 - Add local parser plugin feature (contributed by Dean Serenevy)
@@ -37,6 +37,7 @@ pydocmd simple jc.parsers.mount+ > ../docs/parsers/mount.md
 pydocmd simple jc.parsers.netstat+ > ../docs/parsers/netstat.md
 pydocmd simple jc.parsers.ntpq+ > ../docs/parsers/ntpq.md
 pydocmd simple jc.parsers.passwd+ > ../docs/parsers/passwd.md
+pydocmd simple jc.parsers.ping+ > ../docs/parsers/ping.md
 pydocmd simple jc.parsers.pip_list+ > ../docs/parsers/pip_list.md
 pydocmd simple jc.parsers.pip_show+ > ../docs/parsers/pip_show.md
 pydocmd simple jc.parsers.ps+ > ../docs/parsers/ps.md
@@ -44,11 +45,14 @@ pydocmd simple jc.parsers.route+ > ../docs/parsers/route.md
 pydocmd simple jc.parsers.shadow+ > ../docs/parsers/shadow.md
 pydocmd simple jc.parsers.ss+ > ../docs/parsers/ss.md
 pydocmd simple jc.parsers.stat+ > ../docs/parsers/stat.md
+pydocmd simple jc.parsers.sysctl+ > ../docs/parsers/sysctl.md
 pydocmd simple jc.parsers.systemctl+ > ../docs/parsers/systemctl.md
 pydocmd simple jc.parsers.systemctl_lj+ > ../docs/parsers/systemctl_lj.md
 pydocmd simple jc.parsers.systemctl_ls+ > ../docs/parsers/systemctl_ls.md
 pydocmd simple jc.parsers.systemctl_luf+ > ../docs/parsers/systemctl_luf.md
 pydocmd simple jc.parsers.timedatectl+ > ../docs/parsers/timedatectl.md
+pydocmd simple jc.parsers.tracepath+ > ../docs/parsers/tracepath.md
+pydocmd simple jc.parsers.traceroute+ > ../docs/parsers/traceroute.md
 pydocmd simple jc.parsers.uname+ > ../docs/parsers/uname.md
 pydocmd simple jc.parsers.uptime+ > ../docs/parsers/uptime.md
 pydocmd simple jc.parsers.w+ > ../docs/parsers/w.md
@@ -3,7 +3,9 @@ jc - JSON CLI output utility INI Parser

 Usage:

-    specify --ini as the first argument if the piped input is coming from an INI file
+    Specify --ini as the first argument if the piped input is coming from an INI file or any
+    simple key/value pair file. Delimiter can be '=' or ':'. Missing values are supported.
+    Comment prefix can be '#' or ';'. Comments must be on their own line.

 Compatibility:

@@ -61,11 +63,14 @@ Parameters:

 Returns:

-    Dictionary representing an ini document:
+    Dictionary representing an ini or simple key/value pair document:

     {
-      ini document converted to a dictionary
-      see configparser standard library documentation for more details
+      ini or key/value document converted to a dictionary - see configparser standard
+      library documentation for more details.
+
+      Note: Values starting and ending with quotation marks will have the marks removed.
+      If you would like to keep the quotation marks, use the -r or raw=True argument.
     }

 ## parse
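The key/value support described in this diff can be approximated with Python's `configparser` by injecting a synthetic section header. This is an illustrative sketch, not jc's exact implementation; the `parse_kv` name and the `[data]` section label are assumptions.

```python
import configparser

def parse_kv(text):
    # configparser needs a section header, so inject a synthetic [data]
    # section. Delimiters, comment prefixes, and missing-value support
    # mirror the behavior described in the updated docstring. Note that
    # configparser lowercases keys by default (optionxform).
    cp = configparser.ConfigParser(
        delimiters=('=', ':'),
        comment_prefixes=('#', ';'),
        allow_no_value=True,   # "Missing values are supported"
        strict=False,
    )
    cp.read_string('[data]\n' + text)
    result = dict(cp['data'])
    # non-raw output strips surrounding quotation marks from values
    for key, value in result.items():
        if value and len(value) > 1 and value[0] == value[-1] and value[0] in '"\'':
            result[key] = value[1:-1]
    return result
```

A value-less key comes back as `None`, which matches the "missing values" behavior described above.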
docs/parsers/ping.md (new file, 171 lines)

@@ -0,0 +1,171 @@
# jc.parsers.ping
jc - JSON CLI output utility ping Parser

Usage:

    specify --ping as the first argument if the piped input is coming from ping

    Note: Use the ping -c (count) option, otherwise data will not be piped to jc.

Compatibility:

    'linux', 'darwin', 'freebsd'

Examples:

    $ ping -c 3 -p ff cnn.com | jc --ping -p
    {
      "destination_ip": "151.101.1.67",
      "data_bytes": 56,
      "pattern": "0xff",
      "destination": "cnn.com",
      "packets_transmitted": 3,
      "packets_received": 3,
      "packet_loss_percent": 0.0,
      "duplicates": 0,
      "round_trip_ms_min": 28.015,
      "round_trip_ms_avg": 32.848,
      "round_trip_ms_max": 39.376,
      "round_trip_ms_stddev": 4.79,
      "responses": [
        {
          "type": "reply",
          "bytes": 64,
          "response_ip": "151.101.1.67",
          "icmp_seq": 0,
          "ttl": 59,
          "time_ms": 28.015,
          "duplicate": false
        },
        {
          "type": "reply",
          "bytes": 64,
          "response_ip": "151.101.1.67",
          "icmp_seq": 1,
          "ttl": 59,
          "time_ms": 39.376,
          "duplicate": false
        },
        {
          "type": "reply",
          "bytes": 64,
          "response_ip": "151.101.1.67",
          "icmp_seq": 2,
          "ttl": 59,
          "time_ms": 31.153,
          "duplicate": false
        }
      ]
    }

    $ ping -c 3 -p ff cnn.com | jc --ping -p -r
    {
      "destination_ip": "151.101.129.67",
      "data_bytes": "56",
      "pattern": "0xff",
      "destination": "cnn.com",
      "packets_transmitted": "3",
      "packets_received": "3",
      "packet_loss_percent": "0.0",
      "duplicates": "0",
      "round_trip_ms_min": "25.078",
      "round_trip_ms_avg": "29.543",
      "round_trip_ms_max": "32.553",
      "round_trip_ms_stddev": "3.221",
      "responses": [
        {
          "type": "reply",
          "bytes": "64",
          "response_ip": "151.101.129.67",
          "icmp_seq": "0",
          "ttl": "59",
          "time_ms": "25.078",
          "duplicate": false
        },
        {
          "type": "reply",
          "bytes": "64",
          "response_ip": "151.101.129.67",
          "icmp_seq": "1",
          "ttl": "59",
          "time_ms": "30.999",
          "duplicate": false
        },
        {
          "type": "reply",
          "bytes": "64",
          "response_ip": "151.101.129.67",
          "icmp_seq": "2",
          "ttl": "59",
          "time_ms": "32.553",
          "duplicate": false
        }
      ]
    }

## info
```python
info(self, /, *args, **kwargs)
```

## process
```python
process(proc_data)
```

Final processing to conform to the schema.

Parameters:

    proc_data: (dictionary) raw structured data to process

Returns:

    Dictionary. Structured data with the following schema:

    {
      "source_ip": string,
      "destination_ip": string,
      "data_bytes": integer,
      "pattern": string, (null if not set)
      "destination": string,
      "packets_transmitted": integer,
      "packets_received": integer,
      "packet_loss_percent": float,
      "duplicates": integer,
      "round_trip_ms_min": float,
      "round_trip_ms_avg": float,
      "round_trip_ms_max": float,
      "round_trip_ms_stddev": float,
      "responses": [
        {
          "type": string, ('reply' or 'timeout')
          "timestamp": float,
          "bytes": integer,
          "response_ip": string,
          "icmp_seq": integer,
          "ttl": integer,
          "time_ms": float,
          "duplicate": boolean
        }
      ]
    }

## parse
```python
parse(data, raw=False, quiet=False)
```

Main text parsing function

Parameters:

    data: (string) text data to parse
    raw: (boolean) output preprocessed JSON if True
    quiet: (boolean) suppress warning messages if True

Returns:

    Dictionary. Raw or processed structured data.
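The `process` step documented above converts the raw string values into the typed schema (compare the `-r` example with the default output). A minimal sketch of that conversion, using the field names from the schema; this is illustrative, not jc's exact code:

```python
def process(proc_data):
    # Convert numeric strings in the raw parser output to ints and floats
    # so the final output matches the documented schema.
    int_fields = ('data_bytes', 'packets_transmitted', 'packets_received', 'duplicates')
    float_fields = ('packet_loss_percent', 'round_trip_ms_min', 'round_trip_ms_avg',
                    'round_trip_ms_max', 'round_trip_ms_stddev')
    for key in int_fields:
        if key in proc_data:
            proc_data[key] = int(proc_data[key])
    for key in float_fields:
        if key in proc_data:
            proc_data[key] = float(proc_data[key])
    for response in proc_data.get('responses', []):
        for key in ('bytes', 'icmp_seq', 'ttl'):
            if key in response:
                response[key] = int(response[key])
        if 'time_ms' in response:
            response['time_ms'] = float(response['time_ms'])
    return proc_data
```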
docs/parsers/sysctl.md (new file, 84 lines)

@@ -0,0 +1,84 @@
# jc.parsers.sysctl
jc - JSON CLI output utility sysctl -a Parser

Usage:

    specify --sysctl as the first argument if the piped input is coming from sysctl -a

    Note: since sysctl output is not easily parsable only a very simple key/value object
    will be output. An attempt is made to convert obvious integers and floats. If no
    conversion is desired, use the -r (raw) option.

Compatibility:

    'linux', 'darwin', 'freebsd'

Examples:

    $ sysctl | jc --sysctl -p
    {
      "user.cs_path": "/usr/bin:/bin:/usr/sbin:/sbin",
      "user.bc_base_max": 99,
      "user.bc_dim_max": 2048,
      "user.bc_scale_max": 99,
      "user.bc_string_max": 1000,
      "user.coll_weights_max": 2,
      "user.expr_nest_max": 32
      ...
    }

    $ sysctl | jc --sysctl -p -r
    {
      "user.cs_path": "/usr/bin:/bin:/usr/sbin:/sbin",
      "user.bc_base_max": "99",
      "user.bc_dim_max": "2048",
      "user.bc_scale_max": "99",
      "user.bc_string_max": "1000",
      "user.coll_weights_max": "2",
      "user.expr_nest_max": "32",
      ...
    }

## info
```python
info(self, /, *args, **kwargs)
```

## process
```python
process(proc_data)
```

Final processing to conform to the schema.

Parameters:

    proc_data: (dictionary) raw structured data to process

Returns:

    Dictionary. Structured data with the following schema:

    {
      "foo": string/integer/float,   # best guess based on value
      "bar": string/integer/float,
      "baz": string/integer/float
    }

## parse
```python
parse(data, raw=False, quiet=False)
```

Main text parsing function

Parameters:

    data: (string) text data to parse
    raw: (boolean) output preprocessed JSON if True
    quiet: (boolean) suppress warning messages if True

Returns:

    Dictionary. Raw or processed structured data.
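The "best guess" conversion noted above (try integer, then float, otherwise keep the string) can be sketched as follows. The helper names are hypothetical, and the line split assumes the BSD/macOS `key: value` form; linux `sysctl` emits `key = value`, which would need its own handling:

```python
def convert_value(value):
    # Best-guess type conversion: try int, then float, else keep the string.
    try:
        return int(value)
    except ValueError:
        pass
    try:
        return float(value)
    except ValueError:
        return value

def parse_sysctl_bsd(text):
    # Split each line on the first ':' (BSD/macOS style), so values that
    # themselves contain colons, like PATH-style strings, stay intact.
    result = {}
    for line in text.splitlines():
        key, sep, value = line.partition(':')
        if sep:
            result[key.strip()] = convert_value(value.strip())
    return result
```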
docs/parsers/tracepath.md (new file, 158 lines)

@@ -0,0 +1,158 @@
# jc.parsers.tracepath
jc - JSON CLI output utility tracepath Parser

Usage:

    specify --tracepath as the first argument if the piped input is coming from tracepath

Compatibility:

    'linux'

Examples:

    $ tracepath6 3ffe:2400:0:109::2 | jc --tracepath -p
    {
      "pmtu": 1480,
      "forward_hops": 2,
      "return_hops": 2,
      "hops": [
        {
          "ttl": 1,
          "guess": true,
          "host": "[LOCALHOST]",
          "reply_ms": null,
          "pmtu": 1500,
          "asymmetric_difference": null,
          "reached": false
        },
        {
          "ttl": 1,
          "guess": false,
          "host": "dust.inr.ac.ru",
          "reply_ms": 0.411,
          "pmtu": null,
          "asymmetric_difference": null,
          "reached": false
        },
        {
          "ttl": 2,
          "guess": false,
          "host": "dust.inr.ac.ru",
          "reply_ms": 0.39,
          "pmtu": 1480,
          "asymmetric_difference": 1,
          "reached": false
        },
        {
          "ttl": 2,
          "guess": false,
          "host": "3ffe:2400:0:109::2",
          "reply_ms": 463.514,
          "pmtu": null,
          "asymmetric_difference": null,
          "reached": true
        }
      ]
    }

    $ tracepath6 3ffe:2400:0:109::2 | jc --tracepath -p -r
    {
      "pmtu": "1480",
      "forward_hops": "2",
      "return_hops": "2",
      "hops": [
        {
          "ttl": "1",
          "guess": true,
          "host": "[LOCALHOST]",
          "reply_ms": null,
          "pmtu": "1500",
          "asymmetric_difference": null,
          "reached": false
        },
        {
          "ttl": "1",
          "guess": false,
          "host": "dust.inr.ac.ru",
          "reply_ms": "0.411",
          "pmtu": null,
          "asymmetric_difference": null,
          "reached": false
        },
        {
          "ttl": "2",
          "guess": false,
          "host": "dust.inr.ac.ru",
          "reply_ms": "0.390",
          "pmtu": "1480",
          "asymmetric_difference": "1",
          "reached": false
        },
        {
          "ttl": "2",
          "guess": false,
          "host": "3ffe:2400:0:109::2",
          "reply_ms": "463.514",
          "pmtu": null,
          "asymmetric_difference": null,
          "reached": true
        }
      ]
    }

## info
```python
info(self, /, *args, **kwargs)
```

## process
```python
process(proc_data)
```

Final processing to conform to the schema.

Parameters:

    proc_data: (dictionary) raw structured data to process

Returns:

    Dictionary. Structured data with the following schema:

    {
      "pmtu": integer,
      "forward_hops": integer,
      "return_hops": integer,
      "hops": [
        {
          "ttl": integer,
          "guess": boolean,
          "host": string,
          "reply_ms": float,
          "pmtu": integer,
          "asymmetric_difference": integer,
          "reached": boolean
        }
      ]
    }

## parse
```python
parse(data, raw=False, quiet=False)
```

Main text parsing function

Parameters:

    data: (string) text data to parse
    raw: (boolean) output preprocessed JSON if True
    quiet: (boolean) suppress warning messages if True

Returns:

    Dictionary. Raw or processed structured data.
docs/parsers/traceroute.md (new file, 147 lines)

@@ -0,0 +1,147 @@
# jc.parsers.traceroute
jc - JSON CLI output utility traceroute Parser

Usage:

    specify --traceroute as the first argument if the piped input is coming from traceroute

    Note: on OSX and FreeBSD be sure to redirect STDERR to STDOUT since the header line is sent to STDERR
          e.g. $ traceroute 8.8.8.8 2>&1 | jc --traceroute

Compatibility:

    'linux', 'darwin', 'freebsd'

Examples:

    $ traceroute google.com | jc --traceroute -p
    {
      "destination_ip": "216.58.194.46",
      "destination_name": "google.com",
      "hops": [
        {
          "hop": 1,
          "probes": [
            {
              "annotation": null,
              "asn": null,
              "ip": "216.230.231.141",
              "name": "216-230-231-141.static.houston.tx.oplink.net",
              "rtt": 198.574
            },
            {
              "annotation": null,
              "asn": null,
              "ip": "216.230.231.141",
              "name": "216-230-231-141.static.houston.tx.oplink.net",
              "rtt": null
            },
            {
              "annotation": null,
              "asn": null,
              "ip": "216.230.231.141",
              "name": "216-230-231-141.static.houston.tx.oplink.net",
              "rtt": 198.65
            }
          ]
        },
        ...
      ]
    }

    $ traceroute google.com | jc --traceroute -p -r
    {
      "destination_ip": "216.58.194.46",
      "destination_name": "google.com",
      "hops": [
        {
          "hop": "1",
          "probes": [
            {
              "annotation": null,
              "asn": null,
              "ip": "216.230.231.141",
              "name": "216-230-231-141.static.houston.tx.oplink.net",
              "rtt": "198.574"
            },
            {
              "annotation": null,
              "asn": null,
              "ip": "216.230.231.141",
              "name": "216-230-231-141.static.houston.tx.oplink.net",
              "rtt": null
            },
            {
              "annotation": null,
              "asn": null,
              "ip": "216.230.231.141",
              "name": "216-230-231-141.static.houston.tx.oplink.net",
              "rtt": "198.650"
            }
          ]
        },
        ...
      ]
    }

## info
```python
info(self, /, *args, **kwargs)
```

## Hop
```python
Hop(self, idx)
```

## process
```python
process(proc_data)
```

Final processing to conform to the schema.

Parameters:

    proc_data: (dictionary) raw structured data to process

Returns:

    Dictionary. Structured data with the following schema:

    {
      "destination_ip": string,
      "destination_name": string,
      "hops": [
        {
          "hop": integer,
          "probes": [
            {
              "annotation": string,
              "asn": integer,
              "ip": string,
              "name": string,
              "rtt": float
            }
          ]
        }
      ]
    }

## parse
```python
parse(data, raw=False, quiet=False)
```

Main text parsing function

Parameters:

    data: (string) text data to parse
    raw: (boolean) output preprocessed JSON if True
    quiet: (boolean) suppress warning messages if True

Returns:

    Dictionary. Raw or processed structured data.
@@ -48,3 +48,18 @@ Returns:

     no return, just prints output to STDERR

+
+## has_data
+```python
+has_data(data)
+```
+
+Checks if the input contains data. If there are any non-whitespace characters then return True, else return False
+
+Parameters:
+
+    data: (string) input to check whether it contains data
+
+Returns:
+
+    Boolean. True if input string (data) contains non-whitespace characters, otherwise False
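The `has_data` check documented above reduces to a one-liner. A sketch consistent with the documented behavior (not necessarily jc's exact implementation):

```python
def has_data(data):
    # True only if the input contains at least one non-whitespace character
    return bool(data and not data.isspace())
```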
jc/cli.py (183 changed lines)

@@ -11,18 +11,18 @@ import importlib
 import textwrap
 import signal
 import json
 import pygments
 from pygments import highlight
 from pygments.style import Style
 from pygments.token import (Name, Number, String, Keyword)
 from pygments.lexers import JsonLexer
 from pygments.formatters import Terminal256Formatter
 import jc.utils
 import jc.appdirs as appdirs


 class info():
-    version = '1.11.3'
-    description = 'jc cli output JSON conversion tool'
+    version = '1.13.0'
+    description = 'JSON CLI output utility'
     author = 'Kelly Brazil'
     author_email = 'kellyjonbrazil@gmail.com'

@@ -63,6 +63,7 @@ parsers = [
     'netstat',
     'ntpq',
     'passwd',
+    'ping',
     'pip-list',
     'pip-show',
     'ps',
@@ -70,11 +71,14 @@ parsers = [
     'shadow',
     'ss',
     'stat',
+    'sysctl',
     'systemctl',
     'systemctl-lj',
     'systemctl-ls',
     'systemctl-luf',
     'timedatectl',
+    'tracepath',
+    'traceroute',
     'uname',
     'uptime',
     'w',
@@ -86,8 +90,8 @@ parsers = [
 # List of custom or override parsers.
 # Allow any <user_data_dir>/jc/jcparsers/*.py
 local_parsers = []
-data_dir = appdirs.user_data_dir("jc", "jc")
-local_parsers_dir = os.path.join(data_dir, "jcparsers")
+data_dir = appdirs.user_data_dir('jc', 'jc')
+local_parsers_dir = os.path.join(data_dir, 'jcparsers')
 if os.path.isdir(local_parsers_dir):
     sys.path.append(data_dir)
     for name in os.listdir(local_parsers_dir):
@@ -98,8 +102,52 @@ if os.path.isdir(local_parsers_dir):
             parsers.append(plugin_name)


-def set_env_colors():
+# We only support 2.3.0+, pygments changed color names in 2.4.0.
+# startswith is sufficient and avoids potential exceptions from split and int.
+if pygments.__version__.startswith('2.3.'):
+    PYGMENT_COLOR = {
+        'black': '#ansiblack',
+        'red': '#ansidarkred',
+        'green': '#ansidarkgreen',
+        'yellow': '#ansibrown',
+        'blue': '#ansidarkblue',
+        'magenta': '#ansipurple',
+        'cyan': '#ansiteal',
+        'gray': '#ansilightgray',
+        'brightblack': '#ansidarkgray',
+        'brightred': '#ansired',
+        'brightgreen': '#ansigreen',
+        'brightyellow': '#ansiyellow',
+        'brightblue': '#ansiblue',
+        'brightmagenta': '#ansifuchsia',
+        'brightcyan': '#ansiturquoise',
+        'white': '#ansiwhite',
+    }
+else:
+    PYGMENT_COLOR = {
+        'black': 'ansiblack',
+        'red': 'ansired',
+        'green': 'ansigreen',
+        'yellow': 'ansiyellow',
+        'blue': 'ansiblue',
+        'magenta': 'ansimagenta',
+        'cyan': 'ansicyan',
+        'gray': 'ansigray',
+        'brightblack': 'ansibrightblack',
+        'brightred': 'ansibrightred',
+        'brightgreen': 'ansibrightgreen',
+        'brightyellow': 'ansibrightyellow',
+        'brightblue': 'ansibrightblue',
+        'brightmagenta': 'ansibrightmagenta',
+        'brightcyan': 'ansibrightcyan',
+        'white': 'ansiwhite',
+    }
+
+
+def set_env_colors(env_colors=None):
     """
     Return a dictionary to be used in Pygments custom style class.

     Grab custom colors from JC_COLORS environment variable. JC_COLORS env variable takes 4 comma
     separated string values and should be in the format of:
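The version check above works because pygments 2.4.0 renamed its ANSI style identifiers (for example `#ansidarkred` became `ansired`). A testable sketch of the same prefix dispatch, taking the version string as a parameter so it does not require pygments at all; the function name and the color subset are illustrative:

```python
def ansi_color_map(version):
    # Choose the pygments ANSI naming scheme from the version-string
    # prefix, as the cli code does with pygments.__version__. The trailing
    # dot matters: '2.30.0' must not match the '2.3.x' series.
    if version.startswith('2.3.'):
        return {'red': '#ansidarkred', 'green': '#ansidarkgreen', 'blue': '#ansidarkblue'}
    return {'red': 'ansired', 'green': 'ansigreen', 'blue': 'ansiblue'}
```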
@@ -115,40 +163,36 @@ def set_env_colors():
     JC_COLORS=default,default,default,default

     """
-    env_colors = os.getenv('JC_COLORS')
     input_error = False

     if env_colors:
         color_list = env_colors.split(',')
     else:
         color_list = ['default', 'default', 'default', 'default']

-    if env_colors and len(color_list) != 4:
-        print('jc: Warning: could not parse JC_COLORS environment variable\n', file=sys.stderr)
-        input_error = True
-
-    if env_colors:
-        for color in color_list:
-            if color not in ['black', 'red', 'green', 'yellow', 'blue', 'magenta', 'cyan', 'gray', 'brightblack', 'brightred',
-                             'brightgreen', 'brightyellow', 'brightblue', 'brightmagenta', 'brightcyan', 'white', 'default']:
-                print('jc: Warning: could not parse JC_COLORS environment variable\n', file=sys.stderr)
-                input_error = True
+    if len(color_list) != 4:
+        input_error = True
+
+    for color in color_list:
+        if color != 'default' and color not in PYGMENT_COLOR:
+            input_error = True
+
+    # if there is an issue with the env variable, just set all colors to default and move on
+    if input_error:
+        print('jc: Warning: could not parse JC_COLORS environment variable\n', file=sys.stderr)
+        color_list = ['default', 'default', 'default', 'default']

     # Try the color set in the JC_COLORS env variable first. If it is set to default, then fall back to default colors
     return {
-        Name.Tag: f'bold ansi{color_list[0]}' if not color_list[0] == 'default' else 'bold ansiblue',  # key names
-        Keyword: f'ansi{color_list[1]}' if not color_list[1] == 'default' else 'ansibrightblack',  # true, false, null
-        Number: f'ansi{color_list[2]}' if not color_list[2] == 'default' else 'ansimagenta',  # numbers
-        String: f'ansi{color_list[3]}' if not color_list[3] == 'default' else 'ansigreen'  # strings
+        Name.Tag: f'bold {PYGMENT_COLOR[color_list[0]]}' if not color_list[0] == 'default' else f"bold {PYGMENT_COLOR['blue']}",  # key names
+        Keyword: PYGMENT_COLOR[color_list[1]] if not color_list[1] == 'default' else PYGMENT_COLOR['brightblack'],  # true, false, null
+        Number: PYGMENT_COLOR[color_list[2]] if not color_list[2] == 'default' else PYGMENT_COLOR['magenta'],  # numbers
+        String: PYGMENT_COLOR[color_list[3]] if not color_list[3] == 'default' else PYGMENT_COLOR['green']  # strings
     }


 def piped_output():
-    """returns False if stdout is a TTY. True if output is being piped to another program"""
+    """Return False if stdout is a TTY. True if output is being piped to another program"""
     if sys.stdout.isatty():
         return False
     else:
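The JC_COLORS validation logic shown in this hunk (four comma-separated color names, fall back to all-default on any problem) can be condensed into a small, testable sketch. `jc_colors` and `VALID` are hypothetical names; `VALID` stands in for the `PYGMENT_COLOR` keys:

```python
def jc_colors(env=None):
    # Accept four comma-separated color names and fall back to
    # all-default on any parse problem, as the cli code does.
    VALID = {'black', 'red', 'green', 'yellow', 'blue', 'magenta', 'cyan',
             'gray', 'brightblack', 'brightred', 'brightgreen',
             'brightyellow', 'brightblue', 'brightmagenta', 'brightcyan',
             'white'}
    colors = (env or 'default,default,default,default').split(',')
    if len(colors) != 4 or any(c != 'default' and c not in VALID for c in colors):
        return ['default'] * 4
    return colors
```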
@@ -156,34 +200,34 @@


 def ctrlc(signum, frame):
-    """exit with error on SIGINT"""
+    """Exit with error on SIGINT"""
     sys.exit(1)


 def parser_shortname(parser_argument):
-    """short name of the parser with dashes and no -- prefix"""
+    """Return short name of the parser with dashes and no -- prefix"""
     return parser_argument[2:]


 def parser_argument(parser):
-    """short name of the parser with dashes and with -- prefix"""
+    """Return short name of the parser with dashes and with -- prefix"""
     return f'--{parser}'


 def parser_mod_shortname(parser):
-    """short name of the parser's module name (no -- prefix and dashes converted to underscores)"""
+    """Return short name of the parser's module name (no -- prefix and dashes converted to underscores)"""
     return parser.replace('--', '').replace('-', '_')


 def parser_module(parser):
-    """import the module just in time and return the module object"""
+    """Import the module just in time and return the module object"""
     shortname = parser_mod_shortname(parser)
     path = ('jcparsers.' if shortname in local_parsers else 'jc.parsers.')
     return importlib.import_module(path + shortname)


 def parsers_text(indent=0, pad=0):
-    """return the argument and description information from each parser"""
+    """Return the argument and description information from each parser"""
     ptext = ''
     for parser in parsers:
         parser_arg = parser_argument(parser)
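The name helpers above combine into a just-in-time module lookup. A sketch of the name-to-module-path mapping (`module_path` is a hypothetical helper; jc's `parser_module` passes the equivalent result to `importlib.import_module`):

```python
def module_path(name, local_parsers=()):
    # '--pip-list' or 'pip-list' -> 'pip_list'; local plugin parsers load
    # from the 'jcparsers' package, built-ins from 'jc.parsers'.
    shortname = name.replace('--', '').replace('-', '_')
    package = 'jcparsers.' if shortname in local_parsers else 'jc.parsers.'
    return package + shortname
```

`importlib.import_module(module_path('--ping'))` would then import the parser module only when it is actually needed.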
@@ -201,7 +245,7 @@ def parsers_text(indent=0, pad=0):


 def about_jc():
-    """return jc info and the contents of each parser.info as a dictionary"""
+    """Return jc info and the contents of each parser.info as a dictionary"""
     parser_list = []

     for parser in parsers:
@@ -231,7 +275,7 @@ def about_jc():


 def helptext(message):
-    """return the help text with the list of parsers"""
+    """Return the help text with the list of parsers"""
     parsers_string = parsers_text(indent=12, pad=17)

     helptext_string = f'''
@@ -247,7 +291,7 @@ def helptext(message):
|
||||
{parsers_string}
|
||||
Options:
|
||||
-a about jc
|
||||
-d debug - show trace messages
|
||||
-d debug - show traceback (-dd for verbose traceback)
|
||||
-m monochrome output
|
||||
-p pretty print output
|
||||
-q quiet - suppress warnings
|
||||
@@ -260,30 +304,30 @@ def helptext(message):
|
||||
|
||||
jc -p ls -al
|
||||
'''
|
||||
print(textwrap.dedent(helptext_string), file=sys.stderr)
|
||||
|
||||
|
||||
def json_out(data, pretty=False, mono=False, piped_out=False):
|
||||
# set colors
|
||||
class JcStyle(Style):
|
||||
styles = set_env_colors()
|
||||
return textwrap.dedent(helptext_string)
|
||||
|
||||
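The hunk above changes `helptext()` from printing directly to returning the text, so each caller decides the output stream. A minimal sketch of that design; the message text is illustrative:

```python
import sys
import textwrap

def helptext(message):
    # return the text so the caller chooses the stream (stdout vs. stderr)
    helptext_string = f'''
        jc: {message}
        Usage: COMMAND | jc PARSER [OPTIONS]
    '''
    return textwrap.dedent(helptext_string)

print(helptext('missing piped data'), file=sys.stderr)
```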

def json_out(data, pretty=False, env_colors=None, mono=False, piped_out=False):
"""Return a JSON formatted string. String may include color codes or be pretty printed."""
if not mono and not piped_out:
# set colors
class JcStyle(Style):
styles = set_env_colors(env_colors)

if pretty:
print(highlight(json.dumps(data, indent=2), JsonLexer(), Terminal256Formatter(style=JcStyle))[0:-1])
return str(highlight(json.dumps(data, indent=2), JsonLexer(), Terminal256Formatter(style=JcStyle))[0:-1])
else:
print(highlight(json.dumps(data), JsonLexer(), Terminal256Formatter(style=JcStyle))[0:-1])
return str(highlight(json.dumps(data), JsonLexer(), Terminal256Formatter(style=JcStyle))[0:-1])
else:
if pretty:
print(json.dumps(data, indent=2))
return json.dumps(data, indent=2)
else:
print(json.dumps(data))
return json.dumps(data)

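The reworked `json_out()` likewise returns a string instead of printing it. A simplified sketch of its mono/piped path only; in the real function the color path wraps the same string with Pygments `highlight()` before returning, which is omitted here:

```python
import json

def json_out(data, pretty=False):
    # sketch of the mono/piped path; the color path would wrap this
    # same string with pygments.highlight() before returning it
    if pretty:
        return json.dumps(data, indent=2)
    return json.dumps(data)

print(json_out({'name': 'lo0'}, pretty=True))
```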

def generate_magic_command(args):
"""
Returns a tuple with a boolean and a command, where the boolean signifies that
Return a tuple with a boolean and a command, where the boolean signifies that
the command is valid, and the command is either a command string or None.
"""

@@ -316,7 +360,7 @@ def generate_magic_command(args):
magic_dict = {}
parser_info = about_jc()['parsers']

# Create a dictionary of magic_commands to their respective parsers.
# create a dictionary of magic_commands to their respective parsers.
for entry in parser_info:
# Update the dict with all of the magic commands for this parser, if they exist.
magic_dict.update({mc: entry['argument'] for mc in entry.get('magic_commands', [])})
@@ -325,7 +369,7 @@ def generate_magic_command(args):
one_word_command = args_given[0]
two_word_command = ' '.join(args_given[0:2])

# Try to get a parser for two_word_command, otherwise get one for one_word_command
# try to get a parser for two_word_command, otherwise get one for one_word_command
found_parser = magic_dict.get(two_word_command, magic_dict.get(one_word_command))

# construct a new command line using the standard syntax: COMMAND | jc --PARSER -OPTIONS
@@ -338,6 +382,7 @@ def generate_magic_command(args):


def magic():
"""Runs the command generated by generate_magic_command() to support magic syntax"""
valid_command, run_command = generate_magic_command(sys.argv)
if valid_command:
os.system(run_command)
@@ -345,7 +390,7 @@ def magic():
elif run_command is None:
return
else:
helptext(f'parser not found for "{run_command}"')
print(helptext(f'parser not found for "{run_command}"'), file=sys.stderr)
sys.exit(1)

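The magic-command lookup above maps each entry in a parser's `magic_commands` list to that parser's argument, then prefers a two-word match over a one-word match. A self-contained sketch with hypothetical parser metadata shaped like `about_jc()['parsers']`:

```python
# hypothetical parser metadata shaped like about_jc()['parsers']
parser_info = [
    {'argument': '--ls', 'magic_commands': ['ls']},
    {'argument': '--pip-list', 'magic_commands': ['pip list', 'pip3 list']},
]

# map each magic command string to its parser argument
magic_dict = {}
for entry in parser_info:
    magic_dict.update({mc: entry['argument'] for mc in entry.get('magic_commands', [])})

args_given = ['pip', 'list', '--format=columns']
one_word_command = args_given[0]
two_word_command = ' '.join(args_given[0:2])

# prefer the two-word match, fall back to the one-word match
found_parser = magic_dict.get(two_word_command, magic_dict.get(one_word_command))
print(found_parser)
```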

@@ -359,6 +404,8 @@ def main():
except AttributeError:
pass

jc_colors = os.getenv('JC_COLORS')

# try magic syntax first: e.g. jc -p ls -al
magic()

@@ -370,54 +417,54 @@ def main():
options.extend(opt[1:])

debug = 'd' in options
verbose_debug = True if options.count('d') > 1 else False
mono = 'm' in options
pretty = 'p' in options
quiet = 'q' in options
raw = 'r' in options
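Single-letter options are collected by extending a list with every character after the `-`, so `-prd` and `-p -r -d` behave the same, and `verbose_debug` is simply two or more `d` characters. A small sketch with made-up argv contents:

```python
# collect single-letter options, so `jc -p -rd` equals `jc -p -r -d`
args = ['jc', '-p', '-rd', '--arp']
options = []
for opt in args:
    if opt.startswith('-') and not opt.startswith('--'):
        options.extend(opt[1:])

debug = 'd' in options
verbose_debug = options.count('d') > 1
pretty = 'p' in options
raw = 'r' in options
print(options)
```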

if verbose_debug:
import jc.tracebackplus
jc.tracebackplus.enable(context=11)

if 'a' in options:
json_out(about_jc(), pretty=pretty, mono=mono, piped_out=piped_output())
print(json_out(about_jc(), pretty=pretty, env_colors=jc_colors, mono=mono, piped_out=piped_output()))
sys.exit(0)

if sys.stdin.isatty():
helptext('missing piped data')
print(helptext('missing piped data'), file=sys.stderr)
sys.exit(1)

data = sys.stdin.read()

found = False

if debug:
for arg in sys.argv:
parser_name = parser_shortname(arg)
for arg in sys.argv:
parser_name = parser_shortname(arg)

if parser_name in parsers:
# load parser module just in time so we don't need to load all modules
parser = parser_module(arg)
if parser_name in parsers:
# load parser module just in time so we don't need to load all modules
parser = parser_module(arg)
try:
result = parser.parse(data, raw=raw, quiet=quiet)
found = True
break
else:
for arg in sys.argv:
parser_name = parser_shortname(arg)

if parser_name in parsers:
# load parser module just in time so we don't need to load all modules
parser = parser_module(arg)
try:
result = parser.parse(data, raw=raw, quiet=quiet)
found = True
break
except Exception:
except Exception:
if debug:
raise
else:
import jc.utils
jc.utils.error_message(
f'{parser_name} parser could not parse the input data. Did you use the correct parser?\n For details use the -d option.')
f'{parser_name} parser could not parse the input data. Did you use the correct parser?\n'
' For details use the -d or -dd option.')
sys.exit(1)

if not found:
helptext('missing or incorrect arguments')
print(helptext('missing or incorrect arguments'), file=sys.stderr)
sys.exit(1)

json_out(result, pretty=pretty, mono=mono, piped_out=piped_output())
print(json_out(result, pretty=pretty, env_colors=jc_colors, mono=mono, piped_out=piped_output()))


if __name__ == '__main__':
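The reworked parse loop above collapses the old duplicated debug/non-debug branches into one loop: parser exceptions are swallowed unless debug is set, in which case the exception is re-raised so the traceback is shown. A minimal sketch of that pattern with `int` as a stand-in parser function:

```python
def parse_all(data, parser_funcs, debug=False):
    # try each parser; swallow failures unless debug, then re-raise for a traceback
    for parse in parser_funcs:
        try:
            return parse(data)
        except Exception:
            if debug:
                raise
    return None

print(parse_all('123', [int]), parse_all('abc', [int]))
```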

@@ -55,7 +55,7 @@ import jc.utils


class info():
version = '1.0'
version = '1.1'
description = 'airport -I command parser'
author = 'Kelly Brazil'
author_email = 'kellyjonbrazil@gmail.com'
@@ -131,9 +131,11 @@ def parse(data, raw=False, quiet=False):

raw_output = {}

for line in filter(None, data.splitlines()):
linedata = line.split(':', maxsplit=1)
raw_output[linedata[0].strip().lower().replace(' ', '_').replace('.', '_')] = linedata[1].strip()
if jc.utils.has_data(data):

for line in filter(None, data.splitlines()):
linedata = line.split(':', maxsplit=1)
raw_output[linedata[0].strip().lower().replace(' ', '_').replace('.', '_')] = linedata[1].strip()

if raw:
return raw_output

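The parser changes in this comparison share one theme: each parse loop is now guarded with `jc.utils.has_data(data)`, so empty or whitespace-only input yields an empty result instead of an exception. A sketch of the guard (the real helper may differ in detail) applied to airport-style `key: value` lines; the sample data is made up:

```python
def has_data(data):
    # sketch of jc.utils.has_data: True only if there is non-whitespace content
    return bool(data and not data.isspace())

raw_output = {}
data = 'SSID: MyWifi\nBSSID: 00:11:22:33:44:55\n'

if has_data(data):
    for line in filter(None, data.splitlines()):
        linedata = line.split(':', maxsplit=1)
        raw_output[linedata[0].strip().lower().replace(' ', '_')] = linedata[1].strip()

print(raw_output)
```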
@@ -88,7 +88,7 @@ import jc.parsers.universal


class info():
version = '1.0'
version = '1.2'
description = 'airport -s command parser'
author = 'Kelly Brazil'
author_email = 'kellyjonbrazil@gmail.com'
@@ -170,15 +170,17 @@ def parse(data, raw=False, quiet=False):
if not quiet:
jc.utils.compatibility(__name__, info.compatible)

cleandata = data.splitlines()
raw_output = []
cleandata = list(filter(None, data.splitlines()))

# fix headers
cleandata[0] = cleandata[0].lower()
cleandata[0] = cleandata[0].replace('-', '_')
cleandata[0] = cleandata[0].replace('security (auth/unicast/group)', 'security')
if jc.utils.has_data(data):
# fix headers
cleandata[0] = cleandata[0].lower()
cleandata[0] = cleandata[0].replace('-', '_')
cleandata[0] = cleandata[0].replace('security (auth/unicast/group)', 'security')

# parse the data
raw_output = jc.parsers.universal.sparse_table_parse(cleandata)
# parse the data
raw_output = jc.parsers.universal.sparse_table_parse(cleandata)

if raw:
return raw_output

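Before table parsing, the `airport -s` parser normalizes the header row: lowercase it, convert dashes to underscores, and collapse the awkward security column name. A small sketch with a made-up header row:

```python
# made-up airport -s header row
header = 'SSID BSSID RSSI CHANNEL HT CC SECURITY (auth/unicast/group)'
header = header.lower()
header = header.replace('-', '_')
header = header.replace('security (auth/unicast/group)', 'security')
print(header)
```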
@@ -99,7 +99,7 @@ import jc.parsers.universal


class info():
version = '1.4'
version = '1.6'
description = 'arp command parser'
author = 'Kelly Brazil'
author_email = 'kellyjonbrazil@gmail.com'
@@ -171,69 +171,65 @@ def parse(data, raw=False, quiet=False):
if not quiet:
jc.utils.compatibility(__name__, info.compatible)

cleandata = data.splitlines()
raw_output = []
cleandata = list(filter(None, data.splitlines()))

# remove final Entries row if -v was used
if cleandata[-1].startswith('Entries:'):
cleandata.pop(-1)
if jc.utils.has_data(data):

# detect if freebsd/osx style was used
if cleandata[0][-1] == ']':
raw_output = []
for line in cleandata:
splitline = line.split()
output_line = {
'name': splitline[0],
'address': splitline[1].lstrip('(').rstrip(')'),
'hwtype': splitline[-1].lstrip('[').rstrip(']'),
'hwaddress': splitline[3],
'iface': splitline[5]
}
# remove final Entries row if -v was used
if cleandata[-1].startswith('Entries:'):
cleandata.pop(-1)

if 'permanent' in splitline:
output_line['permanent'] = True
# detect if freebsd/osx style was used
if cleandata[0][-1] == ']':
for line in cleandata:
splitline = line.split()
output_line = {
'name': splitline[0],
'address': splitline[1].lstrip('(').rstrip(')'),
'hwtype': splitline[-1].lstrip('[').rstrip(']'),
'hwaddress': splitline[3],
'iface': splitline[5]
}

if 'permanent' in splitline:
output_line['permanent'] = True
else:
output_line['permanent'] = False

if 'expires' in splitline:
output_line['expires'] = splitline[-3]

raw_output.append(output_line)

if raw:
return raw_output
else:
output_line['permanent'] = False
return process(raw_output)

if 'expires' in splitline:
output_line['expires'] = splitline[-3]
# detect if linux style was used
elif cleandata[0].startswith('Address'):

raw_output.append(output_line)
# fix header row to change Flags Mask to flags_mask
cleandata[0] = cleandata[0].replace('Flags Mask', 'flags_mask')
cleandata[0] = cleandata[0].lower()

if raw:
return raw_output
raw_output = jc.parsers.universal.simple_table_parse(cleandata)

# otherwise, try bsd style
else:
return process(raw_output)
for line in cleandata:
line = line.split()
output_line = {
'name': line[0],
'address': line[1].lstrip('(').rstrip(')'),
'hwtype': line[4].lstrip('[').rstrip(']'),
'hwaddress': line[3],
'iface': line[6],
}
raw_output.append(output_line)

# detect if linux style was used
elif cleandata[0].startswith('Address'):

# fix header row to change Flags Mask to flags_mask
cleandata[0] = cleandata[0].replace('Flags Mask', 'flags_mask')
cleandata[0] = cleandata[0].lower()

raw_output = jc.parsers.universal.simple_table_parse(cleandata)

if raw:
return raw_output
else:
return process(raw_output)

# otherwise, try bsd style
if raw:
return raw_output
else:
raw_output = []
for line in cleandata:
line = line.split()
output_line = {
'name': line[0],
'address': line[1].lstrip('(').rstrip(')'),
'hwtype': line[4].lstrip('[').rstrip(']'),
'hwaddress': line[3],
'iface': line[6],
}
raw_output.append(output_line)

if raw:
return raw_output
else:
return process(raw_output)
return process(raw_output)

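The arp parser dispatches on the first line of the output: freebsd/osx style ends with `]`, linux style starts with `Address`, and anything else falls through to bsd style. A sketch of that dispatch as a small function:

```python
def detect_style(first_line):
    # classify arp output by its first line, mirroring the checks above
    if first_line[-1] == ']':
        return 'freebsd/osx'
    if first_line.startswith('Address'):
        return 'linux'
    return 'bsd'

print(detect_style('Address HWtype HWaddress Flags Mask Iface'))
```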
@@ -79,7 +79,7 @@ import jc.utils


class info():
version = '1.0'
version = '1.2'
description = 'blkid command parser'
author = 'Kelly Brazil'
author_email = 'kellyjonbrazil@gmail.com'
@@ -176,7 +176,8 @@ def parse(data, raw=False, quiet=False):

raw_output = []

if data:
if jc.utils.has_data(data):

# if the first field is a device, use normal parsing:
if data.split(maxsplit=1)[0][-1] == ':':
linedata = data.splitlines()

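The blkid parser chooses between its normal and KEY=VALUE parsing paths by checking whether the first field ends with a colon, i.e. looks like a device label. A sketch with made-up blkid output:

```python
data = '/dev/sda1: UUID="abcd-1234" TYPE="ext4"'
# a device-style first field ends with ':'; KEY=VALUE output does not
is_device_first = data.split(maxsplit=1)[0][-1] == ':'
print(is_device_first)
```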
@@ -132,7 +132,7 @@ import jc.parsers.universal


class info():
version = '1.2'
version = '1.4'
description = 'crontab command and file parser'
author = 'Kelly Brazil'
author_email = 'kellyjonbrazil@gmail.com'
@@ -225,44 +225,46 @@ def parse(data, raw=False, quiet=False):
# Clear any blank lines
cleandata = list(filter(None, cleandata))

# Clear any commented lines
for i, line in reversed(list(enumerate(cleandata))):
if line.strip().startswith('#'):
cleandata.pop(i)
if jc.utils.has_data(data):

# Pop any variable assignment lines
cron_var = []
for i, line in reversed(list(enumerate(cleandata))):
if '=' in line:
var_line = cleandata.pop(i)
var_name = var_line.split('=', maxsplit=1)[0].strip()
var_value = var_line.split('=', maxsplit=1)[1].strip()
cron_var.append({'name': var_name,
'value': var_value})
# Clear any commented lines
for i, line in reversed(list(enumerate(cleandata))):
if line.strip().startswith('#'):
cleandata.pop(i)

raw_output['variables'] = cron_var
# Pop any variable assignment lines
cron_var = []
for i, line in reversed(list(enumerate(cleandata))):
if '=' in line:
var_line = cleandata.pop(i)
var_name = var_line.split('=', maxsplit=1)[0].strip()
var_value = var_line.split('=', maxsplit=1)[1].strip()
cron_var.append({'name': var_name,
'value': var_value})

# Pop any shortcut lines
shortcut_list = []
for i, line in reversed(list(enumerate(cleandata))):
if line.strip().startswith('@'):
shortcut_line = cleandata.pop(i)
occurrence = shortcut_line.split(maxsplit=1)[0].strip().lstrip('@')
cmd = shortcut_line.split(maxsplit=1)[1].strip()
shortcut_list.append({'occurrence': occurrence,
'command': cmd})
raw_output['variables'] = cron_var

# Add header row for parsing
cleandata[:0] = ['minute hour day_of_month month day_of_week command']
# Pop any shortcut lines
shortcut_list = []
for i, line in reversed(list(enumerate(cleandata))):
if line.strip().startswith('@'):
shortcut_line = cleandata.pop(i)
occurrence = shortcut_line.split(maxsplit=1)[0].strip().lstrip('@')
cmd = shortcut_line.split(maxsplit=1)[1].strip()
shortcut_list.append({'occurrence': occurrence,
'command': cmd})

if len(cleandata) > 1:
cron_list = jc.parsers.universal.simple_table_parse(cleandata)
# Add header row for parsing
cleandata[:0] = ['minute hour day_of_month month day_of_week command']

raw_output['schedule'] = cron_list
if len(cleandata) > 1:
cron_list = jc.parsers.universal.simple_table_parse(cleandata)

# Add shortcut entries back in
for item in shortcut_list:
raw_output['schedule'].append(item)
raw_output['schedule'] = cron_list

# Add shortcut entries back in
for item in shortcut_list:
raw_output['schedule'].append(item)

if raw:
return raw_output

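Before table parsing, the crontab parser pops variable assignments and `@shortcut` lines out of the data; iterating in reverse keeps the indexes valid while `pop(i)` removes items. A self-contained sketch with a made-up crontab:

```python
cleandata = [
    'PATH=/usr/bin:/bin',
    '@daily /usr/local/bin/backup.sh',
    '5 0 * * * /usr/bin/find / -name tmp',
]

# pop variable assignment lines (reverse order keeps indexes valid while popping)
cron_var = []
for i, line in reversed(list(enumerate(cleandata))):
    if '=' in line:
        var_line = cleandata.pop(i)
        var_name = var_line.split('=', maxsplit=1)[0].strip()
        var_value = var_line.split('=', maxsplit=1)[1].strip()
        cron_var.append({'name': var_name, 'value': var_value})

# pop @shortcut lines
shortcut_list = []
for i, line in reversed(list(enumerate(cleandata))):
    if line.strip().startswith('@'):
        shortcut_line = cleandata.pop(i)
        occurrence = shortcut_line.split(maxsplit=1)[0].strip().lstrip('@')
        cmd = shortcut_line.split(maxsplit=1)[1].strip()
        shortcut_list.append({'occurrence': occurrence, 'command': cmd})

print(cron_var, shortcut_list, cleandata)
```

Only the plain schedule lines remain in `cleandata`, ready for the header row and table parse.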
@@ -133,7 +133,7 @@ import jc.parsers.universal


class info():
version = '1.1'
version = '1.3'
description = 'crontab file parser with user support'
author = 'Kelly Brazil'
author_email = 'kellyjonbrazil@gmail.com'
@@ -226,46 +226,48 @@ def parse(data, raw=False, quiet=False):
# Clear any blank lines
cleandata = list(filter(None, cleandata))

# Clear any commented lines
for i, line in reversed(list(enumerate(cleandata))):
if line.strip().startswith('#'):
cleandata.pop(i)
if jc.utils.has_data(data):

# Pop any variable assignment lines
cron_var = []
for i, line in reversed(list(enumerate(cleandata))):
if '=' in line:
var_line = cleandata.pop(i)
var_name = var_line.split('=', maxsplit=1)[0].strip()
var_value = var_line.split('=', maxsplit=1)[1].strip()
cron_var.append({'name': var_name,
'value': var_value})
# Clear any commented lines
for i, line in reversed(list(enumerate(cleandata))):
if line.strip().startswith('#'):
cleandata.pop(i)

raw_output['variables'] = cron_var
# Pop any variable assignment lines
cron_var = []
for i, line in reversed(list(enumerate(cleandata))):
if '=' in line:
var_line = cleandata.pop(i)
var_name = var_line.split('=', maxsplit=1)[0].strip()
var_value = var_line.split('=', maxsplit=1)[1].strip()
cron_var.append({'name': var_name,
'value': var_value})

# Pop any shortcut lines
shortcut_list = []
for i, line in reversed(list(enumerate(cleandata))):
if line.strip().startswith('@'):
shortcut_line = cleandata.pop(i)
occurrence = shortcut_line.split(maxsplit=1)[0].strip().lstrip('@')
usr = shortcut_line.split(maxsplit=2)[1].strip()
cmd = shortcut_line.split(maxsplit=2)[2].strip()
shortcut_list.append({'occurrence': occurrence,
'user': usr,
'command': cmd})
raw_output['variables'] = cron_var

# Add header row for parsing
cleandata[:0] = ['minute hour day_of_month month day_of_week user command']
# Pop any shortcut lines
shortcut_list = []
for i, line in reversed(list(enumerate(cleandata))):
if line.strip().startswith('@'):
shortcut_line = cleandata.pop(i)
occurrence = shortcut_line.split(maxsplit=1)[0].strip().lstrip('@')
usr = shortcut_line.split(maxsplit=2)[1].strip()
cmd = shortcut_line.split(maxsplit=2)[2].strip()
shortcut_list.append({'occurrence': occurrence,
'user': usr,
'command': cmd})

if len(cleandata) > 1:
cron_list = jc.parsers.universal.simple_table_parse(cleandata)
# Add header row for parsing
cleandata[:0] = ['minute hour day_of_month month day_of_week user command']

raw_output['schedule'] = cron_list
if len(cleandata) > 1:
cron_list = jc.parsers.universal.simple_table_parse(cleandata)

# Add shortcut entries back in
for item in shortcut_list:
raw_output['schedule'].append(item)
raw_output['schedule'] = cron_list

# Add shortcut entries back in
for item in shortcut_list:
raw_output['schedule'].append(item)

if raw:
return raw_output

@@ -63,7 +63,7 @@ import csv


class info():
version = '1.0'
version = '1.1'
description = 'CSV file parser'
author = 'Kelly Brazil'
author_email = 'kellyjonbrazil@gmail.com'
@@ -123,7 +123,8 @@ def parse(data, raw=False, quiet=False):
# Clear any blank lines
cleandata = list(filter(None, cleandata))

if cleandata:
if jc.utils.has_data(data):

dialect = None
try:
dialect = csv.Sniffer().sniff(data[:1024])

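The CSV parser sniffs the dialect from the first 1024 characters and falls back to the default when sniffing fails. A sketch of the same pattern with a made-up semicolon-delimited sample:

```python
import csv
import io

data = 'name;age\nalice;30\nbob;25\n'

dialect = None
try:
    # sniff the delimiter/quoting from a sample of the data
    dialect = csv.Sniffer().sniff(data[:1024])
except Exception:
    pass  # fall back to the default 'excel' dialect

if dialect is not None:
    reader = csv.DictReader(io.StringIO(data), dialect=dialect)
else:
    reader = csv.DictReader(io.StringIO(data))

rows = list(reader)
print(rows)
```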
@@ -73,7 +73,7 @@ import jc.parsers.universal


class info():
version = '1.3'
version = '1.5'
description = 'df command parser'
author = 'Kelly Brazil'
author_email = 'kellyjonbrazil@gmail.com'
@@ -184,14 +184,17 @@ def parse(data, raw=False, quiet=False):
jc.utils.compatibility(__name__, info.compatible)

cleandata = data.splitlines()
raw_output = []

# fix headers
cleandata[0] = cleandata[0].lower()
cleandata[0] = cleandata[0].replace('-', '_')
cleandata[0] = cleandata[0].replace('mounted on', 'mounted_on')
if jc.utils.has_data(data):

# parse the data
raw_output = jc.parsers.universal.sparse_table_parse(cleandata)
# fix headers
cleandata[0] = cleandata[0].lower()
cleandata[0] = cleandata[0].replace('-', '_')
cleandata[0] = cleandata[0].replace('mounted on', 'mounted_on')

# parse the data
raw_output = jc.parsers.universal.sparse_table_parse(cleandata)

if raw:
return raw_output

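The df parser fixes its header row and then hands the lines to a universal table parser. A sketch using a simplified whitespace-split table parse; the real parser uses `jc.parsers.universal.sparse_table_parse`, which also handles empty cells:

```python
def simple_table_parse(lines):
    # whitespace-split table parse: first line is the header row
    headers = lines[0].split()
    return [dict(zip(headers, line.split())) for line in lines[1:]]

cleandata = [
    'Filesystem 1K-blocks Used Available Use% Mounted on',
    '/dev/sda1 1000 200 800 20% /',
]
# fix headers the same way the df parser does
cleandata[0] = cleandata[0].lower().replace('-', '_').replace('mounted on', 'mounted_on')
print(simple_table_parse(cleandata))
```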
@@ -324,7 +324,7 @@ import jc.utils


class info():
version = '1.2'
version = '1.3'
description = 'dig command parser'
author = 'Kelly Brazil'
author_email = 'kellyjonbrazil@gmail.com'
@@ -574,100 +574,103 @@ def parse(data, raw=False, quiet=False):
axfr = False

output_entry = {}
for line in cleandata:

if line.startswith('; <<>> ') and ' axfr ' in line.lower():
question = False
authority = False
answer = False
axfr = True
axfr_list = []
continue
if jc.utils.has_data(data):
for line in cleandata:

if ';' not in line and axfr:
axfr_list.append(parse_axfr(line))
output_entry.update({'axfr': axfr_list})
continue
if line.startswith('; <<>> ') and ' axfr ' in line.lower():
question = False
authority = False
answer = False
axfr = True
axfr_list = []
continue

if line.startswith(';; ->>HEADER<<-'):
output_entry = {}
output_entry.update(parse_header(line))
continue
if ';' not in line and axfr:
axfr_list.append(parse_axfr(line))
output_entry.update({'axfr': axfr_list})
continue

if line.startswith(';; flags:'):
output_entry.update(parse_flags_line(line))
continue
if line.startswith(';; ->>HEADER<<-'):
output_entry = {}
output_entry.update(parse_header(line))
continue

if line.startswith(';; QUESTION SECTION:'):
question = True
authority = False
answer = False
axfr = False
continue
if line.startswith(';; flags:'):
output_entry.update(parse_flags_line(line))
continue

if question:
output_entry['question'] = parse_question(line)
question = False
authority = False
answer = False
axfr = False
continue
if line.startswith(';; QUESTION SECTION:'):
question = True
authority = False
answer = False
axfr = False
continue

if line.startswith(';; AUTHORITY SECTION:'):
question = False
authority = True
answer = False
axfr = False
authority_list = []
continue
if question:
output_entry['question'] = parse_question(line)
question = False
authority = False
answer = False
axfr = False
continue

if ';' not in line and authority:
authority_list.append(parse_authority(line))
output_entry.update({'authority': authority_list})
continue
if line.startswith(';; AUTHORITY SECTION:'):
question = False
authority = True
answer = False
axfr = False
authority_list = []
continue

if line.startswith(';; ANSWER SECTION:'):
question = False
authority = False
answer = True
axfr = False
answer_list = []
continue
if ';' not in line and authority:
authority_list.append(parse_authority(line))
output_entry.update({'authority': authority_list})
continue

if ';' not in line and answer:
answer_list.append(parse_answer(line))
output_entry.update({'answer': answer_list})
continue
if line.startswith(';; ANSWER SECTION:'):
question = False
authority = False
answer = True
axfr = False
answer_list = []
continue

# footer consists of 4 lines
# footer line 1
if line.startswith(';; Query time:'):
output_entry.update({'query_time': line.split(':')[1].lstrip()})
continue
if ';' not in line and answer:
answer_list.append(parse_answer(line))
output_entry.update({'answer': answer_list})
continue

# footer line 2
if line.startswith(';; SERVER:'):
output_entry.update({'server': line.split(':')[1].lstrip()})
continue
# footer consists of 4 lines
# footer line 1
if line.startswith(';; Query time:'):
output_entry.update({'query_time': line.split(':')[1].lstrip()})
continue

# footer line 3
if line.startswith(';; WHEN:'):
output_entry.update({'when': line.split(':', maxsplit=1)[1].lstrip()})
continue
# footer line 2
if line.startswith(';; SERVER:'):
output_entry.update({'server': line.split(':')[1].lstrip()})
continue

# footer line 4 (last line)
if line.startswith(';; MSG SIZE rcvd:'):
output_entry.update({'rcvd': line.split(':')[1].lstrip()})
# footer line 3
if line.startswith(';; WHEN:'):
output_entry.update({'when': line.split(':', maxsplit=1)[1].lstrip()})
continue

if output_entry:
raw_output.append(output_entry)
elif line.startswith(';; XFR size:'):
output_entry.update({'size': line.split(':')[1].lstrip()})
# footer line 4 (last line)
if line.startswith(';; MSG SIZE rcvd:'):
output_entry.update({'rcvd': line.split(':')[1].lstrip()})

if output_entry:
raw_output.append(output_entry)
if output_entry:
raw_output.append(output_entry)
elif line.startswith(';; XFR size:'):
output_entry.update({'size': line.split(':')[1].lstrip()})

if output_entry:
raw_output.append(output_entry)

raw_output = list(filter(None, raw_output))

raw_output = list(filter(None, raw_output))
if raw:
return raw_output
else:

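The dig parser is a state machine: section-header lines flip boolean flags, and data lines are routed by whichever flag is currently set. A much-reduced sketch covering only the question and answer sections, with made-up dig output:

```python
lines = [
    ';; QUESTION SECTION:',
    ';www.example.com. IN A',
    ';; ANSWER SECTION:',
    'www.example.com. 300 IN A 93.184.216.34',
]

question = False
answer = False
sections = {'question': [], 'answer': []}

for line in lines:
    # section headers only flip the state flags
    if line.startswith(';; QUESTION SECTION:'):
        question, answer = True, False
        continue
    if line.startswith(';; ANSWER SECTION:'):
        question, answer = False, True
        continue
    # data lines are routed by the current state
    if question:
        sections['question'].append(line)
        continue
    if ';' not in line and answer:
        sections['answer'].append(line)

print(sections)
```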
@@ -102,7 +102,7 @@ import jc.utils


class info():
version = '1.0'
version = '1.1'
description = 'dmidecode command parser'
author = 'Kelly Brazil'
author_email = 'kellyjonbrazil@gmail.com'
@@ -190,148 +190,150 @@ def parse(data, raw=False, quiet=False):

raw_output = []

data = data.splitlines()
if jc.utils.has_data(data):

# remove header rows
for row in data.copy():
if row:
data.pop(0)
else:
break
data = data.splitlines()

# main parsing loop
for line in data:
# new item
if not line:
item_header = True
item_values = False
value_list = False
# remove header rows
for row in data.copy():
if row:
data.pop(0)
else:
break

# main parsing loop
for line in data:
# new item
if not line:
item_header = True
item_values = False
value_list = False

if item:
if values:
item['values'][attribute] = values
if key_data:
item['values'][f'{key}_data'] = key_data
raw_output.append(item)

item = {}
header = None
key = None
val = None
attribute = None
values = []
key_data = []
continue

# header
if line.startswith('Handle ') and line.endswith('bytes'):

# Handle 0x0000, DMI type 0, 24 bytes
header = line.replace(',', ' ').split()
item = {
'handle': header[1],
'type': header[4],
'bytes': header[5]
}
continue

# description
if item_header:
item_header = False
item_values = True
value_list = False

item['description'] = line
item['values'] = {}
continue

# new item if multiple descriptions in handle
if not item_header and not line.startswith('\t'):
item_header = False
item_values = True
value_list = False

if item:
if values:
item['values'][attribute] = values
if key_data:
item['values'][f'{key}_data'] = key_data
raw_output.append(item)

item = {
'handle': header[1],
'type': header[4],
'bytes': header[5],
'description': line,
'values': {}
}

key = None
val = None
attribute = None
values = []
key_data = []
continue

# keys and values
if item_values \
and len(line.split(':', maxsplit=1)) == 2 \
and line.startswith('\t') \
and not line.startswith('\t\t') \
and not line.strip().endswith(':'):
item_header = False
item_values = True
value_list = False

if item:
if values:
item['values'][attribute] = values
values = []
if key_data:
item['values'][f'{key}_data'] = key_data
raw_output.append(item)
key_data = []

item = {}
header = None
key = None
val = None
attribute = None
values = []
key_data = []
continue
key = line.split(':', maxsplit=1)[0].strip().lower().replace(' ', '_')
val = line.split(':', maxsplit=1)[1].strip()
item['values'].update({key: val})
continue

# header
if line.startswith('Handle ') and line.endswith('bytes'):
# multi-line key
if item_values \
and line.startswith('\t') \
and not line.startswith('\t\t') \
and line.strip().endswith(':'):
item_header = False
item_values = True
value_list = True

# Handle 0x0000, DMI type 0, 24 bytes
header = line.replace(',', ' ').split()
item = {
'handle': header[1],
'type': header[4],
'bytes': header[5]
}
continue

# description
if item_header:
item_header = False
item_values = True
value_list = False

item['description'] = line
item['values'] = {}
continue

# new item if multiple descriptions in handle
if not item_header and not line.startswith('\t'):
item_header = False
item_values = True
value_list = False

if item:
if values:
item['values'][attribute] = values
values = []
if key_data:
item['values'][f'{key}_data'] = key_data
raw_output.append(item)
key_data = []

item = {
'handle': header[1],
'type': header[4],
'bytes': header[5],
'description': line,
'values': {}
}

key = None
val = None
attribute = None
values = []
key_data = []
continue

# keys and values
if item_values \
and len(line.split(':', maxsplit=1)) == 2 \
and line.startswith('\t') \
and not line.startswith('\t\t') \
and not line.strip().endswith(':'):
item_header = False
item_values = True
value_list = False

if values:
item['values'][attribute] = values
attribute = line[:-1].strip().lower().replace(' ', '_')
values = []
if key_data:
item['values'][f'{key}_data'] = key_data
key_data = []
continue

key = line.split(':', maxsplit=1)[0].strip().lower().replace(' ', '_')
val = line.split(':', maxsplit=1)[1].strip()
item['values'].update({key: val})
continue
# multi-line values
if value_list \
and line.startswith('\t\t'):
values.append(line.strip())
continue

# multi-line key
if item_values \
and line.startswith('\t') \
and not line.startswith('\t\t') \
and line.strip().endswith(':'):
item_header = False
item_values = True
value_list = True
# data for hybrid multi-line objects
if item_values \
and not value_list \
and line.startswith('\t\t'):
if f'{key}_data' not in item['values']:
item['values'][f'{key}_data'] = []
key_data.append(line.strip())
continue

if values:
item['values'][attribute] = values
values = []
if key_data:
item['values'][f'{key}_data'] = key_data
key_data = []

attribute = line[:-1].strip().lower().replace(' ', '_')
values = []
continue

# multi-line values
|
||||
if value_list \
|
||||
and line.startswith('\t\t'):
|
||||
values.append(line.strip())
|
||||
continue
|
||||
|
||||
# data for hybrid multi-line objects
|
||||
if item_values \
|
||||
and not value_list \
|
||||
and line.startswith('\t\t'):
|
||||
if f'{key}_data' not in item['values']:
|
||||
item['values'][f'{key}_data'] = []
|
||||
key_data.append(line.strip())
|
||||
continue
|
||||
|
||||
if item:
|
||||
raw_output.append(item)
|
||||
if item:
|
||||
raw_output.append(item)
|
||||
|
||||
if raw:
|
||||
return raw_output
|
||||
|
||||
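The dmidecode parsing loop above is a small state machine that classifies each body line by its tab depth and trailing colon. A minimal sketch of that classification logic (the `classify` helper is hypothetical, written only to illustrate the checks used above):

```python
def classify(line):
    """Classify a dmidecode body line by tab depth, mirroring the
    startswith('\\t') / startswith('\\t\\t') / endswith(':') checks above."""
    if line.startswith('\t\t'):
        return 'value'            # member of a multi-line value list
    if line.startswith('\t'):
        if line.strip().endswith(':'):
            return 'multiline_key'  # e.g. '\tCharacteristics:'
        if len(line.split(':', maxsplit=1)) == 2:
            return 'key_value'      # e.g. '\tVendor: Dell Inc.'
    return 'header'                 # e.g. 'Handle 0x0000, DMI type 0, 24 bytes'
```

Checking `'\t\t'` before `'\t'` matters, since every double-tab line also starts with a single tab.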
@@ -73,7 +73,7 @@ import jc.parsers.universal
 
 
 class info():
-    version = '1.1'
+    version = '1.2'
     description = 'du command parser'
     author = 'Kelly Brazil'
     author_email = 'kellyjonbrazil@gmail.com'
@@ -137,12 +137,12 @@ def parse(data, raw=False, quiet=False):
         jc.utils.compatibility(__name__, info.compatible)
 
     raw_output = []
-    cleandata = data.splitlines()
-
-    # Clear any blank lines
-    cleandata = list(filter(None, cleandata))
+    cleandata = list(filter(None, data.splitlines()))
 
-    if cleandata:
+    if jc.utils.has_data(data):
+
         cleandata.insert(0, 'size name')
         raw_output = jc.parsers.universal.simple_table_parse(cleandata)
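The same two-part refactor recurs throughout these hunks: the blank-line filter collapses into one expression, and the `if cleandata:` guard becomes `if jc.utils.has_data(data):`. A rough sketch of what that looks like; `has_data` here is a stand-in for the real `jc.utils.has_data`, assumed to check for any non-whitespace content:

```python
def has_data(data):
    """Stand-in for jc.utils.has_data: True if the input contains
    anything other than whitespace."""
    return bool(data and not data.isspace())


def clean_lines(data):
    # one expression replacing splitlines() plus the
    # "Clear any blank lines" filter step
    return list(filter(None, data.splitlines()))
```

Gating on the raw input (rather than the filtered list) lets each parser return an empty result for empty or whitespace-only input without special-casing later steps.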
@@ -52,7 +52,7 @@ import jc.utils
 
 
 class info():
-    version = '1.1'
+    version = '1.2'
    description = 'env command parser'
     author = 'Kelly Brazil'
     author_email = 'kellyjonbrazil@gmail.com'
@@ -116,12 +116,10 @@ def parse(data, raw=False, quiet=False):
 
     raw_output = {}
 
-    linedata = data.splitlines()
-
-    # Clear any blank lines
-    cleandata = list(filter(None, linedata))
+    cleandata = list(filter(None, data.splitlines()))
 
-    if cleandata:
+    if jc.utils.has_data(data):
 
         for entry in cleandata:
             parsed_line = entry.split('=', maxsplit=1)
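The `split('=', maxsplit=1)` in the env parser is what keeps values containing `=` intact — only the first `=` separates name from value. A small illustration (the `parse_env_line` helper is hypothetical):

```python
def parse_env_line(entry):
    """Split an env output line on the first '=' only, so values that
    themselves contain '=' (e.g. LS_COLORS, base64 data) stay whole."""
    name, value = entry.split('=', maxsplit=1)
    return {'name': name, 'value': value}
```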
@@ -48,7 +48,7 @@ import jc.parsers.universal
 
 
 class info():
-    version = '1.1'
+    version = '1.2'
     description = 'file command parser'
     author = 'Kelly Brazil'
     author_email = 'kellyjonbrazil@gmail.com'
@@ -104,23 +104,26 @@ def parse(data, raw=False, quiet=False):
     raw_output = []
 
     warned = False
-    for line in filter(None, data.splitlines()):
-        linedata = line.rsplit(': ', maxsplit=1)
 
-        try:
-            filename = linedata[0].strip()
-            filetype = linedata[1].strip()
+    if jc.utils.has_data(data):
 
-            raw_output.append(
-                {
-                    'filename': filename,
-                    'type': filetype
-                }
-            )
-        except IndexError:
-            if not warned:
-                jc.utils.warning_message('Filenames with newline characters detected. Some filenames may be truncated.')
-                warned = True
+        for line in filter(None, data.splitlines()):
+            linedata = line.rsplit(': ', maxsplit=1)
+
+            try:
+                filename = linedata[0].strip()
+                filetype = linedata[1].strip()
+
+                raw_output.append(
+                    {
+                        'filename': filename,
+                        'type': filetype
+                    }
+                )
+            except IndexError:
+                if not warned:
+                    jc.utils.warning_message('Filenames with newline characters detected. Some filenames may be truncated.')
+                    warned = True
 
     if raw:
         return raw_output
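The file parser splits on the last `': '` rather than the first, so filenames that themselves contain `': '` still parse correctly; a line with no separator at all (e.g. a fragment produced by an embedded newline) raises `IndexError`, which is what the warning path above catches. A minimal sketch of that line handler (hypothetical helper):

```python
def parse_file_line(line):
    """Split a `file` output line on the LAST ': ' occurrence."""
    linedata = line.rsplit(': ', maxsplit=1)
    # linedata[1] raises IndexError when no ': ' separator exists
    return {'filename': linedata[0].strip(), 'type': linedata[1].strip()}
```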
@@ -78,9 +78,11 @@ def parse(data, raw=False, quiet=False):
 
     raw_output = []
 
-    for line in filter(None, data.splitlines()):
-        # parse the content
-        pass
+    if jc.utils.has_data(data):
+
+        for line in filter(None, data.splitlines()):
+            # parse the content
+            pass
 
     if raw:
         return raw_output
@@ -53,7 +53,7 @@ import jc.parsers.universal
 
 
 class info():
-    version = '1.0'
+    version = '1.2'
     description = 'free command parser'
     author = 'Kelly Brazil'
     author_email = 'kellyjonbrazil@gmail.com'
@@ -122,14 +122,18 @@ def parse(data, raw=False, quiet=False):
         jc.utils.compatibility(__name__, info.compatible)
 
     cleandata = data.splitlines()
-    cleandata[0] = cleandata[0].lower()
-    cleandata[0] = cleandata[0].replace('buff/cache', 'buff_cache')
-    cleandata[0] = 'type ' + cleandata[0]
+    raw_output = []
 
-    raw_output = jc.parsers.universal.simple_table_parse(cleandata)
+    if jc.utils.has_data(data):
 
-    for entry in raw_output:
-        entry['type'] = entry['type'].rstrip(':')
+        cleandata[0] = cleandata[0].lower()
+        cleandata[0] = cleandata[0].replace('buff/cache', 'buff_cache')
+        cleandata[0] = 'type ' + cleandata[0]
+
+        raw_output = jc.parsers.universal.simple_table_parse(cleandata)
+
+        for entry in raw_output:
+            entry['type'] = entry['type'].rstrip(':')
 
     if raw:
         return raw_output
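The free parser massages the header row and then hands the whole table to a generic table parser. A rough self-contained sketch of that flow, under the assumption that `jc.parsers.universal.simple_table_parse` zips a whitespace-delimited header row against each following row (the `simple_table_parse` here is a simplified stand-in, not the real implementation):

```python
def simple_table_parse(lines):
    """Simplified stand-in for jc.parsers.universal.simple_table_parse:
    first line holds whitespace-delimited headers, the rest are rows."""
    headers = lines[0].split()
    return [dict(zip(headers, row.split())) for row in lines[1:]]


def parse_free(data):
    lines = data.splitlines()
    # normalize the header row and give the row-label column a name
    header = 'type ' + lines[0].lower().replace('buff/cache', 'buff_cache')
    out = simple_table_parse([header] + lines[1:])
    for entry in out:
        entry['type'] = entry['type'].rstrip(':')   # 'Mem:' -> 'Mem'
    return out
```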
@@ -70,7 +70,7 @@ import jc.utils
 
 
 class info():
-    version = '1.2'
+    version = '1.3'
     description = 'fstab file parser'
     author = 'Kelly Brazil'
     author_email = 'kellyjonbrazil@gmail.com'
@@ -141,7 +141,8 @@ def parse(data, raw=False, quiet=False):
     # Clear any blank lines
     cleandata = list(filter(None, cleandata))
 
-    if cleandata:
+    if jc.utils.has_data(data):
+
         for line in cleandata:
             output_line = {}
             # ignore commented lines
@@ -94,7 +94,7 @@ import jc.utils
 
 
 class info():
-    version = '1.0'
+    version = '1.1'
     description = '/etc/group file parser'
     author = 'Kelly Brazil'
     author_email = 'kellyjonbrazil@gmail.com'
@@ -169,7 +169,8 @@ def parse(data, raw=False, quiet=False):
     # Clear any blank lines
     cleandata = list(filter(None, cleandata))
 
-    if cleandata:
+    if jc.utils.has_data(data):
+
         for entry in cleandata:
             if entry.startswith('#'):
                 continue
@@ -60,7 +60,7 @@ import jc.utils
 
 
 class info():
-    version = '1.0'
+    version = '1.1'
     description = '/etc/gshadow file parser'
     author = 'Kelly Brazil'
     author_email = 'kellyjonbrazil@gmail.com'
@@ -131,7 +131,8 @@ def parse(data, raw=False, quiet=False):
     # Clear any blank lines
     cleandata = list(filter(None, cleandata))
 
-    if cleandata:
+    if jc.utils.has_data(data):
+
         for entry in cleandata:
             if entry.startswith('#'):
                 continue
@@ -44,7 +44,7 @@ import jc.utils
 
 
 class info():
-    version = '1.2'
+    version = '1.3'
     description = 'history command parser'
     author = 'Kelly Brazil'
     author_email = 'kellyjonbrazil@gmail.com'
@@ -108,17 +108,19 @@ def parse(data, raw=False, quiet=False):
 
     raw_output = {}
 
-    # split lines and clear out any non-ascii chars
-    linedata = data.encode('ascii', errors='ignore').decode().splitlines()
+    if jc.utils.has_data(data):
 
-    # Skip any blank lines
-    for entry in filter(None, linedata):
-        try:
-            parsed_line = entry.split(maxsplit=1)
-            raw_output[parsed_line[0]] = parsed_line[1]
-        except IndexError:
-            # need to catch indexerror in case there is weird input from prior commands
-            pass
+        # split lines and clear out any non-ascii chars
+        linedata = data.encode('ascii', errors='ignore').decode().splitlines()
+
+        # Skip any blank lines
+        for entry in filter(None, linedata):
+            try:
+                parsed_line = entry.split(maxsplit=1)
+                raw_output[parsed_line[0]] = parsed_line[1]
+            except IndexError:
+                # need to catch indexerror in case there is weird input from prior commands
+                pass
 
     if raw:
         return raw_output
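The history parser combines two tricks: non-ASCII bytes are dropped with `encode('ascii', errors='ignore')`, and `split(maxsplit=1)` separates the leading history number from the command while leaving the command's own whitespace intact. A small illustration (hypothetical helper):

```python
def parse_history_line(entry):
    """Strip non-ascii characters, then split the leading index from
    the command text with maxsplit=1."""
    entry = entry.encode('ascii', errors='ignore').decode()
    number, command = entry.split(maxsplit=1)   # IndexError-prone on odd input
    return {number: command}
```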
@@ -61,7 +61,7 @@ import jc.utils
 
 
 class info():
-    version = '1.1'
+    version = '1.2'
     description = '/etc/hosts file parser'
     author = 'Kelly Brazil'
     author_email = 'kellyjonbrazil@gmail.com'
@@ -117,12 +117,12 @@ def parse(data, raw=False, quiet=False):
         jc.utils.compatibility(__name__, info.compatible)
 
     raw_output = []
-    cleandata = data.splitlines()
-
-    # Clear any blank lines
-    cleandata = list(filter(None, cleandata))
+    cleandata = list(filter(None, data.splitlines()))
 
-    if cleandata:
+    if jc.utils.has_data(data):
+
         for line in cleandata:
             output_line = {}
             # ignore commented lines
@@ -70,7 +70,7 @@ import jc.utils
 
 
 class info():
-    version = '1.0'
+    version = '1.1'
     description = 'id command parser'
     author = 'Kelly Brazil'
     author_email = 'kellyjonbrazil@gmail.com'
@@ -166,12 +166,12 @@ def parse(data, raw=False, quiet=False):
         jc.utils.compatibility(__name__, info.compatible)
 
     raw_output = {}
-    cleandata = data.split()
-
-    # Clear any blank lines
-    cleandata = list(filter(None, cleandata))
+    cleandata = list(filter(None, data.split()))
 
-    if cleandata:
+    if jc.utils.has_data(data):
+
         for section in cleandata:
             if section.startswith('uid'):
                 uid_parsed = section.replace('(', '=').replace(')', '=')
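The `replace('(', '=').replace(')', '=')` trick above turns `id` output like `uid=1000(kbrazil)` into `uid=1000=kbrazil=`, which then splits cleanly on a single delimiter. A small sketch of that step (hypothetical helper):

```python
def parse_uid(section):
    """Rewrite both parens to '=' so 'uid=1000(kbrazil)' splits into
    ['uid', '1000', 'kbrazil', ''] on '='."""
    uid_parsed = section.replace('(', '=').replace(')', '=')
    parts = uid_parsed.split('=')
    return {'id': parts[1], 'name': parts[2]}
```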
@@ -147,7 +147,7 @@ import jc.utils
 
 
 class info():
-    version = '1.7'
+    version = '1.8'
     description = 'ifconfig command parser'
     author = 'Kelly Brazil'
     author_email = 'kellyjonbrazil@gmail.com'
@@ -475,14 +475,16 @@ def parse(data, raw=False, quiet=False):
 
     raw_output = []
 
-    parsed = IfconfigParser(console_output=data)
-    interfaces = parsed.get_interfaces()
+    if jc.utils.has_data(data):
 
-    # convert ifconfigparser output to a dictionary
-    for iface in interfaces:
-        d = interfaces[iface]._asdict()
-        dct = dict(d)
-        raw_output.append(dct)
+        parsed = IfconfigParser(console_output=data)
+        interfaces = parsed.get_interfaces()
+
+        # convert ifconfigparser output to a dictionary
+        for iface in interfaces:
+            d = interfaces[iface]._asdict()
+            dct = dict(d)
+            raw_output.append(dct)
 
     if raw:
         return raw_output
@@ -2,7 +2,9 @@
 
 Usage:
 
-    specify --ini as the first argument if the piped input is coming from an INI file
+    Specify --ini as the first argument if the piped input is coming from an INI file or any
+    simple key/value pair file. Delimiter can be '=' or ':'. Missing values are supported.
+    Comment prefix can be '#' or ';'. Comments must be on their own line.
 
 Compatibility:
 
@@ -47,8 +49,8 @@ import configparser
 
 
 class info():
-    version = '1.0'
-    description = 'INI file parser'
+    version = '1.2'
+    description = 'INI file parser. Also parses files/output containing simple key/value pairs'
     author = 'Kelly Brazil'
     author_email = 'kellyjonbrazil@gmail.com'
     details = 'Using configparser from the standard library'
@@ -70,15 +72,33 @@ def process(proc_data):
 
     Returns:
 
-        Dictionary representing an ini document:
+        Dictionary representing an ini or simple key/value pair document:
 
         {
-          ini document converted to a dictionary
-          see configparser standard library documentation for more details
+          ini or key/value document converted to a dictionary - see configparser standard
+          library documentation for more details.
+
+          Note: Values starting and ending with quotation marks will have the marks removed.
+                If you would like to keep the quotation marks, use the -r or raw=True argument.
         }
     """
-    # No further processing
+    # remove quotation marks from beginning and end of values
+    for heading in proc_data:
+
+        # standard ini files with headers
+        if isinstance(proc_data[heading], dict):
+            for key, value in proc_data[heading].items():
+                if value is not None and value.startswith('"') and value.endswith('"'):
+                    proc_data[heading][key] = value.lstrip('"').rstrip('"')
+                elif value is None:
+                    proc_data[heading][key] = ''
+
+        # simple key/value files with no headers
+        else:
+            if proc_data[heading] is not None and proc_data[heading].startswith('"') and proc_data[heading].endswith('"'):
+                proc_data[heading] = proc_data[heading].lstrip('"').rstrip('"')
+            elif proc_data[heading] is None:
+                proc_data[heading] = ''
+
     return proc_data
@@ -101,10 +121,19 @@ def parse(data, raw=False, quiet=False):
 
     raw_output = {}
 
-    if data:
-        ini = configparser.ConfigParser()
-        ini.read_string(data)
-        raw_output = {s: dict(ini.items(s)) for s in ini.sections()}
+    if jc.utils.has_data(data):
+
+        ini = configparser.ConfigParser(allow_no_value=True, interpolation=None)
+        try:
+            ini.read_string(data)
+            raw_output = {s: dict(ini.items(s)) for s in ini.sections()}
+
+        except configparser.MissingSectionHeaderError:
+            data = '[data]\n' + data
+            ini.read_string(data)
+            output_dict = {s: dict(ini.items(s)) for s in ini.sections()}
+            for key, value in output_dict['data'].items():
+                raw_output[key] = value
 
     if raw:
         return raw_output
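The ini parser change above leans on two `configparser` behaviors: `allow_no_value=True` accepts keys with no value, and a headerless key/value document raises `MissingSectionHeaderError`, which is handled by prepending a synthetic `[data]` section and flattening it away. A self-contained sketch of that flow:

```python
import configparser


def parse_ini(data):
    """Parse an INI document; fall back to a synthetic [data] section
    for plain key/value files with no section headers."""
    ini = configparser.ConfigParser(allow_no_value=True, interpolation=None)
    try:
        ini.read_string(data)
        return {s: dict(ini.items(s)) for s in ini.sections()}
    except configparser.MissingSectionHeaderError:
        # headerless key/value file: add a dummy section, then flatten it
        ini.read_string('[data]\n' + data)
        return dict(ini.items('data'))
```

Disabling interpolation keeps literal `%` characters in values from raising errors.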
@@ -134,7 +134,7 @@ import jc.utils
 
 
 class info():
-    version = '1.2'
+    version = '1.4'
     description = 'iptables command parser'
     author = 'Kelly Brazil'
     author_email = 'kellyjonbrazil@gmail.com'
@@ -194,19 +194,19 @@ def process(proc_data):
             if 'bytes' in rule:
                 multiplier = 1
                 if rule['bytes'][-1] == 'K':
-                    multiplier = 1000
+                    multiplier = 10 ** 3
                     rule['bytes'] = rule['bytes'].rstrip('K')
                 elif rule['bytes'][-1] == 'M':
-                    multiplier = 1000000
+                    multiplier = 10 ** 6
                     rule['bytes'] = rule['bytes'].rstrip('M')
                 elif rule['bytes'][-1] == 'G':
-                    multiplier = 1000000000
+                    multiplier = 10 ** 9
                     rule['bytes'] = rule['bytes'].rstrip('G')
                 elif rule['bytes'][-1] == 'T':
-                    multiplier = 1000000000000
+                    multiplier = 10 ** 12
                     rule['bytes'] = rule['bytes'].rstrip('T')
                 elif rule['bytes'][-1] == 'P':
-                    multiplier = 1000000000000000
+                    multiplier = 10 ** 15
                     rule['bytes'] = rule['bytes'].rstrip('P')
 
                 try:
@@ -243,36 +243,39 @@ def parse(data, raw=False, quiet=False):
     chain = {}
     headers = []
 
-    cleandata = data.splitlines()
+    if jc.utils.has_data(data):
 
-    for line in cleandata:
+        for line in list(filter(None, data.splitlines())):
 
-        if line.startswith('Chain'):
-            if chain:
-                raw_output.append(chain)
-
-            chain = {}
-            headers = []
-
-            parsed_line = line.split()
-
-            chain['chain'] = parsed_line[1]
-            chain['rules'] = []
-
-            continue
-
-        elif line.startswith('target') or line.find('pkts') == 1 or line.startswith('num'):
-            headers = []
-            headers = [h for h in ' '.join(line.lower().strip().split()).split() if h]
-            headers.append("options")
-
-            continue
-
-        else:
-            rule = line.split(maxsplit=len(headers) - 1)
-            temp_rule = dict(zip(headers, rule))
-            if temp_rule:
-                chain['rules'].append(temp_rule)
+            if line.startswith('Chain'):
+                if chain:
+                    raw_output.append(chain)
+
+                chain = {}
+                headers = []
+
+                parsed_line = line.split()
+
+                chain['chain'] = parsed_line[1]
+                chain['rules'] = []
+
+                continue
+
+            elif line.startswith('target') or line.find('pkts') == 1 or line.startswith('num'):
+                headers = []
+                headers = [h for h in ' '.join(line.lower().strip().split()).split() if h]
+                headers.append("options")
+
+                continue
+
+            else:
+                rule = line.split(maxsplit=len(headers) - 1)
+                temp_rule = dict(zip(headers, rule))
+                if temp_rule:
+                    chain['rules'].append(temp_rule)
 
-    if chain:
-        raw_output.append(chain)
+        if chain:
+            raw_output.append(chain)
+
+    raw_output = list(filter(None, raw_output))
 
     if raw:
         return raw_output
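The multiplier change above is purely cosmetic (`10 ** 3` instead of `1000`), but the conversion it performs is worth spelling out: iptables abbreviates byte counters with SI suffixes (powers of ten, not powers of two). A small sketch of the same expansion (hypothetical helper):

```python
def to_bytes(value):
    """Expand iptables K/M/G/T/P byte suffixes using SI powers of 10,
    matching the multipliers in the processing code above."""
    suffixes = {'K': 10 ** 3, 'M': 10 ** 6, 'G': 10 ** 9,
                'T': 10 ** 12, 'P': 10 ** 15}
    multiplier = suffixes.get(value[-1], 1)
    if multiplier != 1:
        value = value[:-1]          # drop the suffix character
    return int(float(value) * multiplier)
```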
@@ -77,7 +77,7 @@ import jc.utils
 
 
 class info():
-    version = '1.1'
+    version = '1.2'
     description = 'jobs command parser'
     author = 'Kelly Brazil'
     author_email = 'kellyjonbrazil@gmail.com'
@@ -144,12 +144,10 @@ def parse(data, raw=False, quiet=False):
 
     raw_output = []
 
-    linedata = data.splitlines()
-
-    # Clear any blank lines
-    cleandata = list(filter(None, linedata))
+    cleandata = list(filter(None, data.splitlines()))
 
-    if cleandata:
+    if jc.utils.has_data(data):
 
         for entry in cleandata:
             output_line = {}
@@ -72,7 +72,7 @@ import jc.utils
 
 
 class info():
-    version = '1.2'
+    version = '1.3'
     description = 'last and lastb command parser'
     author = 'Kelly Brazil'
     author_email = 'kellyjonbrazil@gmail.com'
@@ -149,12 +149,12 @@ def parse(data, raw=False, quiet=False):
         jc.utils.compatibility(__name__, info.compatible)
 
     raw_output = []
-    cleandata = data.splitlines()
-
-    # Clear any blank lines
-    cleandata = list(filter(None, cleandata))
+    cleandata = list(filter(None, data.splitlines()))
 
-    if cleandata:
+    if jc.utils.has_data(data):
+
         for entry in cleandata:
             output_line = {}
@@ -149,7 +149,7 @@ import jc.utils
 
 
 class info():
-    version = '1.4'
+    version = '1.6'
     description = 'ls command parser'
     author = 'Kelly Brazil'
     author_email = 'kellyjonbrazil@gmail.com'
@@ -226,20 +226,20 @@ def parse(data, raw=False, quiet=False):
 
     linedata = data.splitlines()
 
-    # Delete first line if it starts with 'total 1234'
-    if linedata:
+    if jc.utils.has_data(data):
+
+        # Delete first line if it starts with 'total 1234'
         if re.match(r'total [0-9]+', linedata[0]):
             linedata.pop(0)
 
-    # Look for parent line if glob or -R is used
-    if not re.match(r'[-dclpsbDCMnP?]([-r][-w][-xsS]){2}([-r][-w][-xtT])[+]?', linedata[0]) \
-            and linedata[0].endswith(':'):
-        parent = linedata.pop(0)[:-1]
-        # Pop following total line if it exists
-        if re.match(r'total [0-9]+', linedata[0]):
-            linedata.pop(0)
+        # Look for parent line if glob or -R is used
+        if not re.match(r'[-dclpsbDCMnP?]([-r][-w][-xsS]){2}([-r][-w][-xtT])[+]?', linedata[0]) \
+                and linedata[0].endswith(':'):
+            parent = linedata.pop(0)[:-1]
+            # Pop following total line if it exists
+            if re.match(r'total [0-9]+', linedata[0]):
+                linedata.pop(0)
 
     if linedata:
         # Check if -l was used to parse extra data
         if re.match(r'[-dclpsbDCMnP?]([-r][-w][-xsS]){2}([-r][-w][-xtT])[+]?', linedata[0]):
             for entry in linedata:
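The ls parser decides whether a line is a long-listing entry by matching the Unix mode string (file type character plus three permission triads, with an optional `+` for ACLs). The same pattern, extracted for illustration:

```python
import re

# mode-string pattern from the ls parser above: file-type char,
# two rwx/setuid triads, one rwx/sticky triad, optional trailing '+'
PERMS = re.compile(r'[-dclpsbDCMnP?]([-r][-w][-xsS]){2}([-r][-w][-xtT])[+]?')
```

Lines that fail the match (such as `total 16` or a `directory:` label from `ls -R`) are handled by the other branches above.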
@@ -216,7 +216,7 @@ import jc.parsers.universal
 
 
 class info():
-    version = '1.3'
+    version = '1.5'
     description = 'lsblk command parser'
     author = 'Kelly Brazil'
     author_email = 'kellyjonbrazil@gmail.com'
@@ -327,20 +327,23 @@ def parse(data, raw=False, quiet=False):
     if not quiet:
         jc.utils.compatibility(__name__, info.compatible)
 
-    linedata = data.splitlines()
-    # Clear any blank lines
-    cleandata = list(filter(None, linedata))
+    cleandata = list(filter(None, data.splitlines()))
     raw_output = []
 
-    cleandata[0] = cleandata[0].lower()
-    cleandata[0] = cleandata[0].replace(':', '_')
-    cleandata[0] = cleandata[0].replace('-', '_')
+    if jc.utils.has_data(data):
 
-    raw_output = jc.parsers.universal.sparse_table_parse(cleandata)
+        cleandata = data.splitlines()
 
-    # clean up non-ascii characters, if any
-    for entry in raw_output:
-        entry['name'] = entry['name'].encode('ascii', errors='ignore').decode()
+        cleandata[0] = cleandata[0].lower()
+        cleandata[0] = cleandata[0].replace(':', '_')
+        cleandata[0] = cleandata[0].replace('-', '_')
+
+        raw_output = jc.parsers.universal.sparse_table_parse(cleandata)
+
+        # clean up non-ascii characters, if any
+        for entry in raw_output:
+            entry['name'] = entry['name'].encode('ascii', errors='ignore').decode()
 
     if raw:
         return raw_output
@@ -107,7 +107,7 @@ import jc.parsers.universal
 
 
 class info():
-    version = '1.1'
+    version = '1.3'
     description = 'lsmod command parser'
     author = 'Kelly Brazil'
     author_email = 'kellyjonbrazil@gmail.com'
@@ -175,13 +175,17 @@ def parse(data, raw=False, quiet=False):
         jc.utils.compatibility(__name__, info.compatible)
 
     cleandata = data.splitlines()
-    cleandata[0] = cleandata[0].lower()
+    raw_output = []
 
-    raw_output = jc.parsers.universal.simple_table_parse(cleandata)
+    if jc.utils.has_data(data):
 
-    for mod in raw_output:
-        if 'by' in mod:
-            mod['by'] = mod['by'].split(',')
+        cleandata[0] = cleandata[0].lower()
+
+        raw_output = jc.parsers.universal.simple_table_parse(cleandata)
+
+        for mod in raw_output:
+            if 'by' in mod:
+                mod['by'] = mod['by'].split(',')
 
     if raw:
         return raw_output
@@ -97,7 +97,7 @@ import jc.parsers.universal
 
 
 class info():
-    version = '1.1'
+    version = '1.2'
     description = 'lsof command parser'
     author = 'Kelly Brazil'
     author_email = 'kellyjonbrazil@gmail.com'
@@ -169,12 +169,11 @@ def parse(data, raw=False, quiet=False):
 
     raw_output = []
 
-    linedata = data.splitlines()
-
-    # Clear any blank lines
-    cleandata = list(filter(None, linedata))
+    cleandata = list(filter(None, data.splitlines()))
 
-    if cleandata:
+    if jc.utils.has_data(data):
+
         cleandata[0] = cleandata[0].lower()
         cleandata[0] = cleandata[0].replace('/', '_')
@@ -56,7 +56,7 @@ import jc.utils
 
 
 class info():
-    version = '1.3'
+    version = '1.5'
     description = 'mount command parser'
     author = 'Kelly Brazil'
     author_email = 'kellyjonbrazil@gmail.com'
@@ -158,12 +158,12 @@ def parse(data, raw=False, quiet=False):
     if not quiet:
         jc.utils.compatibility(__name__, info.compatible)
 
-    linedata = data.splitlines()
-
-    # Clear any blank lines
-    cleandata = list(filter(None, linedata))
+    cleandata = list(filter(None, data.splitlines()))
     raw_output = []
 
-    if cleandata:
+    if jc.utils.has_data(data):
+
         # check for OSX output
         if ' type ' not in cleandata[0]:
             raw_output = osx_parse(cleandata)
@@ -247,7 +247,7 @@ Examples:
 
 
 class info():
-    version = '1.6'
+    version = '1.8'
     description = 'netstat command parser'
     author = 'Kelly Brazil'
     author_email = 'kellyjonbrazil@gmail.com'
@@ -431,29 +431,30 @@ def parse(data, raw=False, quiet=False):
     if not quiet:
         jc.utils.compatibility(__name__, info.compatible)
 
-    cleandata = data.splitlines()
-    cleandata = list(filter(None, cleandata))
+    cleandata = list(filter(None, data.splitlines()))
     raw_output = []
 
-    # check for FreeBSD/OSX vs Linux
-    if cleandata[0] == 'Active Internet connections' \
-       or cleandata[0] == 'Active Internet connections (including servers)' \
-       or cleandata[0] == 'Active Multipath Internet connections' \
-       or cleandata[0] == 'Active LOCAL (UNIX) domain sockets' \
-       or cleandata[0] == 'Registered kernel control modules' \
-       or cleandata[0] == 'Active kernel event sockets' \
-       or cleandata[0] == 'Active kernel control sockets' \
-       or cleandata[0] == 'Routing tables' \
-       or cleandata[0].startswith('Name '):
+    if jc.utils.has_data(data):
 
-        import jc.parsers.netstat_freebsd_osx
-        raw_output = jc.parsers.netstat_freebsd_osx.parse(cleandata)
+        # is this from FreeBSD/OSX?
+        if cleandata[0] == 'Active Internet connections' \
+           or cleandata[0] == 'Active Internet connections (including servers)' \
+           or cleandata[0] == 'Active Multipath Internet connections' \
+           or cleandata[0] == 'Active LOCAL (UNIX) domain sockets' \
+           or cleandata[0] == 'Registered kernel control modules' \
+           or cleandata[0] == 'Active kernel event sockets' \
+           or cleandata[0] == 'Active kernel control sockets' \
+           or cleandata[0] == 'Routing tables' \
+           or cleandata[0].startswith('Name '):
 
-    # use linux parser
-    else:
-        import jc.parsers.netstat_linux
-        raw_output = jc.parsers.netstat_linux.parse(cleandata)
+            import jc.parsers.netstat_freebsd_osx
+            raw_output = jc.parsers.netstat_freebsd_osx.parse(cleandata)
+
+        # use linux parser
+        else:
+            import jc.parsers.netstat_linux
+            raw_output = jc.parsers.netstat_linux.parse(cleandata)
 
     if raw:
         return raw_output
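The netstat dispatch above keys entirely off the first output line: BSD/OSX netstat starts with one of a fixed set of banner lines, and anything else falls through to the Linux sub-parser. A compact sketch of that decision (the `pick_parser` helper is hypothetical; it returns the module name instead of importing it):

```python
def pick_parser(first_line):
    """Choose the netstat sub-parser from the first line of output,
    mirroring the banner check above."""
    bsd_banners = {
        'Active Internet connections',
        'Active Internet connections (including servers)',
        'Active Multipath Internet connections',
        'Active LOCAL (UNIX) domain sockets',
        'Registered kernel control modules',
        'Active kernel event sockets',
        'Active kernel control sockets',
        'Routing tables',
    }
    if first_line in bsd_banners or first_line.startswith('Name '):
        return 'netstat_freebsd_osx'
    return 'netstat_linux'
```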
@@ -183,7 +183,7 @@ import jc.parsers.universal
 
 
 class info():
-    version = '1.1'
+    version = '1.3'
     description = 'ntpq -p command parser'
     author = 'Kelly Brazil'
     author_email = 'kellyjonbrazil@gmail.com'
@@ -268,28 +268,30 @@ def parse(data, raw=False, quiet=False):
     if not quiet:
         jc.utils.compatibility(__name__, info.compatible)
 
-    cleandata = data.splitlines()
+    raw_output = []
 
-    cleandata[0] = 's ' + cleandata[0]
-    cleandata[0] = cleandata[0].lower()
+    cleandata = data.splitlines()
 
-    # delete header delimiter
-    del cleandata[1]
+    if jc.utils.has_data(data):
+
+        cleandata[0] = 's ' + cleandata[0]
+        cleandata[0] = cleandata[0].lower()
 
-    # separate first character with a space for easier parsing
-    for i, line in list(enumerate(cleandata[1:])):
-        if line[0] == ' ':
-            # fixup for no-state
-            cleandata[i + 1] = '~ ' + line[1:]
-        else:
-            # fixup - realign columns since we added the 's' column
-            cleandata[i + 1] = line[:1] + ' ' + line[1:]
+        # delete header delimiter
+        del cleandata[1]
 
-        # fixup for occaisional ip/hostname fields with a space
-        cleandata[i + 1] = cleandata[i + 1].replace(' (', '_(')
+        # separate first character with a space for easier parsing
+        for i, line in list(enumerate(cleandata[1:])):
+            if line[0] == ' ':
+                # fixup for no-state
+                cleandata[i + 1] = '~ ' + line[1:]
+            else:
+                # fixup - realign columns since we added the 's' column
+                cleandata[i + 1] = line[:1] + ' ' + line[1:]
 
-    raw_output = jc.parsers.universal.simple_table_parse(cleandata)
+            # fixup for occaisional ip/hostname fields with a space
+            cleandata[i + 1] = cleandata[i + 1].replace(' (', '_(')
+
+        raw_output = jc.parsers.universal.simple_table_parse(cleandata)
 
     if raw:
         return raw_output
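The ntpq fixups above exist because the tally character (`*`, `+`, space, etc.) is glued to the remote hostname in `ntpq -p` output. The parser adds an `s` header column, splits the tally character off each row, and substitutes `~` when the tally is blank. A self-contained sketch of those steps (hypothetical helper; simplified from the code above):

```python
def fixup_ntpq(lines):
    """Prepare ntpq -p output for whitespace table parsing: add an 's'
    header for the tally column, drop the '====' delimiter row, split
    the tally character off each row, and join hostname fields that
    contain a space."""
    lines = lines[:]                           # don't mutate the caller's list
    lines[0] = ('s ' + lines[0]).lower()
    del lines[1]                               # header delimiter row
    for i, line in enumerate(lines[1:]):
        if line[0] == ' ':
            lines[i + 1] = '~ ' + line[1:]     # no tally state
        else:
            lines[i + 1] = line[:1] + ' ' + line[1:]
        lines[i + 1] = lines[i + 1].replace(' (', '_(')
    return lines
```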
@@ -78,7 +78,7 @@ import jc.utils
 
 
 class info():
-    version = '1.0'
+    version = '1.1'
     description = '/etc/passwd file parser'
     author = 'Kelly Brazil'
     author_email = 'kellyjonbrazil@gmail.com'
@@ -146,12 +146,12 @@ def parse(data, raw=False, quiet=False):
         jc.utils.compatibility(__name__, info.compatible)
 
     raw_output = []
-    cleandata = data.splitlines()
-
-    # Clear any blank lines
-    cleandata = list(filter(None, cleandata))
+    cleandata = list(filter(None, data.splitlines()))
 
-    if cleandata:
+    if jc.utils.has_data(data):
+
         for entry in cleandata:
             if entry.startswith('#'):
                 continue
jc/parsers/ping.py (new file, 507 lines)
@@ -0,0 +1,507 @@
|
||||
"""jc - JSON CLI output utility ping Parser

Usage:

    specify --ping as the first argument if the piped input is coming from ping

    Note: Use the ping -c (count) option, otherwise data will not be piped to jc.

Compatibility:

    'linux', 'darwin', 'freebsd'

Examples:

    $ ping -c 3 -p ff cnn.com | jc --ping -p
    {
      "destination_ip": "151.101.1.67",
      "data_bytes": 56,
      "pattern": "0xff",
      "destination": "cnn.com",
      "packets_transmitted": 3,
      "packets_received": 3,
      "packet_loss_percent": 0.0,
      "duplicates": 0,
      "round_trip_ms_min": 28.015,
      "round_trip_ms_avg": 32.848,
      "round_trip_ms_max": 39.376,
      "round_trip_ms_stddev": 4.79,
      "responses": [
        {
          "type": "reply",
          "bytes": 64,
          "response_ip": "151.101.1.67",
          "icmp_seq": 0,
          "ttl": 59,
          "time_ms": 28.015,
          "duplicate": false
        },
        {
          "type": "reply",
          "bytes": 64,
          "response_ip": "151.101.1.67",
          "icmp_seq": 1,
          "ttl": 59,
          "time_ms": 39.376,
          "duplicate": false
        },
        {
          "type": "reply",
          "bytes": 64,
          "response_ip": "151.101.1.67",
          "icmp_seq": 2,
          "ttl": 59,
          "time_ms": 31.153,
          "duplicate": false
        }
      ]
    }

    $ ping -c 3 -p ff cnn.com | jc --ping -p -r
    {
      "destination_ip": "151.101.129.67",
      "data_bytes": "56",
      "pattern": "0xff",
      "destination": "cnn.com",
      "packets_transmitted": "3",
      "packets_received": "3",
      "packet_loss_percent": "0.0",
      "duplicates": "0",
      "round_trip_ms_min": "25.078",
      "round_trip_ms_avg": "29.543",
      "round_trip_ms_max": "32.553",
      "round_trip_ms_stddev": "3.221",
      "responses": [
        {
          "type": "reply",
          "bytes": "64",
          "response_ip": "151.101.129.67",
          "icmp_seq": "0",
          "ttl": "59",
          "time_ms": "25.078",
          "duplicate": false
        },
        {
          "type": "reply",
          "bytes": "64",
          "response_ip": "151.101.129.67",
          "icmp_seq": "1",
          "ttl": "59",
          "time_ms": "30.999",
          "duplicate": false
        },
        {
          "type": "reply",
          "bytes": "64",
          "response_ip": "151.101.129.67",
          "icmp_seq": "2",
          "ttl": "59",
          "time_ms": "32.553",
          "duplicate": false
        }
      ]
    }
"""
import string
import jc.utils


class info():
    version = '1.0'
    description = 'ping command parser'
    author = 'Kelly Brazil'
    author_email = 'kellyjonbrazil@gmail.com'

    # compatible options: linux, darwin, cygwin, win32, aix, freebsd
    compatible = ['linux', 'darwin', 'freebsd']
    magic_commands = ['ping', 'ping6']


__version__ = info.version


def process(proc_data):
    """
    Final processing to conform to the schema.

    Parameters:

        proc_data: (dictionary) raw structured data to process

    Returns:

        Dictionary. Structured data with the following schema:

        {
          "source_ip": string,
          "destination_ip": string,
          "data_bytes": integer,
          "pattern": string, (null if not set)
          "destination": string,
          "packets_transmitted": integer,
          "packets_received": integer,
          "packet_loss_percent": float,
          "duplicates": integer,
          "round_trip_ms_min": float,
          "round_trip_ms_avg": float,
          "round_trip_ms_max": float,
          "round_trip_ms_stddev": float,
          "responses": [
            {
              "type": string, ('reply' or 'timeout')
              "timestamp": float,
              "bytes": integer,
              "response_ip": string,
              "icmp_seq": integer,
              "ttl": integer,
              "time_ms": float,
              "duplicate": boolean
            }
          ]
        }
    """
    int_list = ['data_bytes', 'packets_transmitted', 'packets_received', 'bytes', 'icmp_seq', 'ttl', 'duplicates']
    float_list = ['packet_loss_percent', 'round_trip_ms_min', 'round_trip_ms_avg', 'round_trip_ms_max',
                  'round_trip_ms_stddev', 'timestamp', 'time_ms']

    for key in proc_data.keys():
        for item in int_list:
            if item == key:
                try:
                    proc_data[key] = int(proc_data[key])
                except (ValueError, TypeError):
                    proc_data[key] = None

        for item in float_list:
            if item == key:
                try:
                    proc_data[key] = float(proc_data[key])
                except (ValueError, TypeError):
                    proc_data[key] = None

        if key == 'responses':
            for entry in proc_data['responses']:
                for k in entry.keys():
                    if k in int_list:
                        try:
                            entry[k] = int(entry[k])
                        except (ValueError, TypeError):
                            entry[k] = None
                    if k in float_list:
                        try:
                            entry[k] = float(entry[k])
                        except (ValueError, TypeError):
                            entry[k] = None

    return proc_data
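The type-conversion pattern used by process() above can be exercised in isolation. This is a minimal standalone sketch: the field lists are a shortened subset of the parser's, and the sample reply dict is hypothetical. Values convert to int or float where named, and unparseable or missing values become None instead of raising.

```python
# Shortened subset of the parser's conversion lists (illustrative only)
int_list = ['bytes', 'icmp_seq', 'ttl']
float_list = ['time_ms']

def convert(entry):
    # Coerce string values in place; failures become None rather than raising
    for key in entry:
        if key in int_list:
            try:
                entry[key] = int(entry[key])
            except (ValueError, TypeError):
                entry[key] = None
        if key in float_list:
            try:
                entry[key] = float(entry[key])
            except (ValueError, TypeError):
                entry[key] = None
    return entry

# hypothetical raw reply as the line parsers would emit it (all strings)
reply = convert({'bytes': '64', 'icmp_seq': '0', 'ttl': '59', 'time_ms': '28.015'})
```

This is why the `-r` (raw) example above shows quoted numbers: the raw output skips this pass entirely.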
def linux_parse(data):
    raw_output = {}
    ping_responses = []
    pattern = None
    footer = False

    linedata = data.splitlines()

    # check for PATTERN
    if linedata[0].startswith('PATTERN: '):
        pattern = linedata.pop(0).split(': ')[1]

    while not linedata[0].startswith('PING '):
        linedata.pop(0)

    ipv4 = True if 'bytes of data' in linedata[0] else False

    if ipv4 and linedata[0][5] not in string.digits:
        hostname = True
    elif ipv4 and linedata[0][5] in string.digits:
        hostname = False
    elif not ipv4 and ' (' in linedata[0]:
        hostname = True
    else:
        hostname = False

    for line in filter(None, linedata):
        if line.startswith('PING '):
            if ipv4 and not hostname:
                dst_ip, dta_byts = (2, 3)
            elif ipv4 and hostname:
                dst_ip, dta_byts = (2, 3)
            elif not ipv4 and not hostname:
                dst_ip, dta_byts = (2, 3)
            else:
                dst_ip, dta_byts = (3, 4)

            line = line.replace('(', ' ').replace(')', ' ')
            raw_output.update(
                {
                    'destination_ip': line.split()[dst_ip].lstrip('(').rstrip(')'),
                    'data_bytes': line.split()[dta_byts],
                    'pattern': pattern
                }
            )
            continue

        if line.startswith('---'):
            footer = True
            raw_output['destination'] = line.split()[1]
            continue

        if footer:
            if 'packets transmitted' in line:
                if ' duplicates,' in line:
                    raw_output.update(
                        {
                            'packets_transmitted': line.split()[0],
                            'packets_received': line.split()[3],
                            'packet_loss_percent': line.split()[7].rstrip('%'),
                            'duplicates': line.split()[5].lstrip('+'),
                            'time_ms': line.split()[11].replace('ms', '')
                        }
                    )
                    continue
                else:
                    raw_output.update(
                        {
                            'packets_transmitted': line.split()[0],
                            'packets_received': line.split()[3],
                            'packet_loss_percent': line.split()[5].rstrip('%'),
                            'duplicates': '0',
                            'time_ms': line.split()[9].replace('ms', '')
                        }
                    )
                    continue

            else:
                split_line = line.split(' = ')[1]
                split_line = split_line.split('/')
                raw_output.update(
                    {
                        'round_trip_ms_min': split_line[0],
                        'round_trip_ms_avg': split_line[1],
                        'round_trip_ms_max': split_line[2],
                        'round_trip_ms_stddev': split_line[3].split()[0]
                    }
                )

        # ping response lines
        else:
            # request timeout
            if 'no answer yet for icmp_seq=' in line:
                timestamp = False
                isequence = 5

                # if timestamp option is specified, then shift icmp sequence field right by one
                if line[0] == '[':
                    timestamp = True
                    isequence = 6

                response = {
                    'type': 'timeout',
                    'timestamp': line.split()[0].lstrip('[').rstrip(']') if timestamp else None,
                    'icmp_seq': line.replace('=', ' ').split()[isequence]
                }
                ping_responses.append(response)
                continue

            # normal responses
            else:
                line = line.replace('(', ' ').replace(')', ' ').replace('=', ' ')

                # positions of items depend on whether ipv4/ipv6 and/or ip/hostname is used
                if ipv4 and not hostname:
                    bts, rip, iseq, t2l, tms = (0, 3, 5, 7, 9)
                elif ipv4 and hostname:
                    bts, rip, iseq, t2l, tms = (0, 4, 7, 9, 11)
                elif not ipv4 and not hostname:
                    bts, rip, iseq, t2l, tms = (0, 3, 5, 7, 9)
                elif not ipv4 and hostname:
                    bts, rip, iseq, t2l, tms = (0, 4, 7, 9, 11)

                # if timestamp option is specified, then shift everything right by one
                timestamp = False
                if line[0] == '[':
                    timestamp = True
                    bts, rip, iseq, t2l, tms = (bts + 1, rip + 1, iseq + 1, t2l + 1, tms + 1)

                response = {
                    'type': 'reply',
                    'timestamp': line.split()[0].lstrip('[').rstrip(']') if timestamp else None,
                    'bytes': line.split()[bts],
                    'response_ip': line.split()[rip].rstrip(':'),
                    'icmp_seq': line.split()[iseq],
                    'ttl': line.split()[t2l],
                    'time_ms': line.split()[tms],
                    'duplicate': True if 'DUP!' in line else False
                }

                ping_responses.append(response)
                continue

    raw_output['responses'] = ping_responses

    return raw_output
def bsd_parse(data):
    raw_output = {}
    ping_responses = []
    pattern = None
    footer = False

    linedata = data.splitlines()

    # check for PATTERN
    if linedata[0].startswith('PATTERN: '):
        pattern = linedata.pop(0).split(': ')[1]

    for line in filter(None, linedata):
        if line.startswith('PING '):
            raw_output.update(
                {
                    'destination_ip': line.split()[2].lstrip('(').rstrip(':').rstrip(')'),
                    'data_bytes': line.split()[3],
                    'pattern': pattern
                }
            )
            continue

        if line.startswith('PING6('):
            line = line.replace('(', ' ').replace(')', ' ').replace('=', ' ')
            raw_output.update(
                {
                    'source_ip': line.split()[4],
                    'destination_ip': line.split()[6],
                    'data_bytes': line.split()[1],
                    'pattern': pattern
                }
            )
            continue

        if line.startswith('---'):
            footer = True
            raw_output['destination'] = line.split()[1]
            continue

        if footer:
            if 'packets transmitted' in line:
                if ' duplicates,' in line:
                    raw_output.update(
                        {
                            'packets_transmitted': line.split()[0],
                            'packets_received': line.split()[3],
                            'packet_loss_percent': line.split()[8].rstrip('%'),
                            'duplicates': line.split()[6].lstrip('+'),
                        }
                    )
                    continue
                else:
                    raw_output.update(
                        {
                            'packets_transmitted': line.split()[0],
                            'packets_received': line.split()[3],
                            'packet_loss_percent': line.split()[6].rstrip('%'),
                            'duplicates': '0',
                        }
                    )
                    continue

            else:
                split_line = line.split(' = ')[1]
                split_line = split_line.split('/')
                raw_output.update(
                    {
                        'round_trip_ms_min': split_line[0],
                        'round_trip_ms_avg': split_line[1],
                        'round_trip_ms_max': split_line[2],
                        'round_trip_ms_stddev': split_line[3].replace(' ms', '')
                    }
                )

        # ping response lines
        else:
            # ipv4 lines
            if ',' not in line:

                # request timeout
                if line.startswith('Request timeout for '):
                    response = {
                        'type': 'timeout',
                        'icmp_seq': line.split()[4]
                    }
                    ping_responses.append(response)
                    continue

                # normal response
                else:
                    line = line.replace(':', ' ').replace('=', ' ')
                    response = {
                        'type': 'reply',
                        'bytes': line.split()[0],
                        'response_ip': line.split()[3],
                        'icmp_seq': line.split()[5],
                        'ttl': line.split()[7],
                        'time_ms': line.split()[9]
                    }
                    ping_responses.append(response)
                    continue

            # ipv6 lines
            else:
                line = line.replace(',', ' ').replace('=', ' ')
                response = {
                    'type': 'reply',
                    'bytes': line.split()[0],
                    'response_ip': line.split()[3],
                    'icmp_seq': line.split()[5],
                    'ttl': line.split()[7],
                    'time_ms': line.split()[9]
                }
                ping_responses.append(response)
                continue

    # identify duplicates in responses
    if ping_responses:
        seq_list = []
        for reply in ping_responses:
            seq_list.append(reply['icmp_seq'])
            reply['duplicate'] = True if seq_list.count(reply['icmp_seq']) > 1 else False
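The duplicate-detection pass at the end of bsd_parse() can be sketched on its own with hypothetical responses. Note that because the count is taken as replies are appended, only the later occurrence of a repeated icmp_seq is flagged, not the first:

```python
# Hypothetical minimal responses; the real entries carry more fields
responses = [
    {'icmp_seq': '0'},
    {'icmp_seq': '1'},
    {'icmp_seq': '1'},   # a duplicate reply for sequence 1
]

seq_list = []
for reply in responses:
    seq_list.append(reply['icmp_seq'])
    # flagged only if this sequence number has now been seen more than once
    reply['duplicate'] = seq_list.count(reply['icmp_seq']) > 1
```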

    raw_output['responses'] = ping_responses

    return raw_output


def parse(data, raw=False, quiet=False):
    """
    Main text parsing function

    Parameters:

        data: (string) text data to parse
        raw: (boolean) output preprocessed JSON if True
        quiet: (boolean) suppress warning messages if True

    Returns:

        Dictionary. Raw or processed structured data.
    """
    if not quiet:
        jc.utils.compatibility(__name__, info.compatible)

    raw_output = {}

    if jc.utils.has_data(data):

        if 'time' in data.splitlines()[-2]:
            raw_output = linux_parse(data)
        else:
            raw_output = bsd_parse(data)

    if raw:
        return raw_output
    else:
        return process(raw_output)
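The platform detection in parse() above hinges on one detail of ping's footer: the Linux (iputils) summary line contains the word 'time' (e.g. "... time 2003ms") as the second-to-last line, while BSD/macOS output does not. A standalone sketch, using hypothetical footer lines:

```python
# Hypothetical second-to-last output lines from each platform's ping
linux_line = '3 packets transmitted, 3 received, 0% packet loss, time 2003ms'
bsd_line = '3 packets transmitted, 3 packets received, 0.0% packet loss'

def detect(second_to_last_line):
    # Mirrors the heuristic in parse(): 'time' marks Linux-style output
    return 'linux' if 'time' in second_to_last_line else 'bsd'
```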
@@ -32,7 +32,7 @@ import jc.parsers.universal


class info():
    version = '1.1'
    version = '1.3'
    description = 'pip list command parser'
    author = 'Kelly Brazil'
    author_email = 'kellyjonbrazil@gmail.com'

@@ -88,28 +88,28 @@ def parse(data, raw=False, quiet=False):

    raw_output = []

    linedata = data.splitlines()

    # Clear any blank lines
    cleandata = list(filter(None, linedata))
    cleandata = list(filter(None, data.splitlines()))

    # detect legacy output type
    if ' (' in cleandata[0]:
        for row in cleandata:
            raw_output.append({'package': row.split(' (')[0],
                               'version': row.split(' (')[1].rstrip(')')})
    if jc.utils.has_data(data):

    # otherwise normal table output
    else:
        # clear separator line
        for i, line in reversed(list(enumerate(cleandata))):
            if '---' in line:
                cleandata.pop(i)
        # detect legacy output type
        if ' (' in cleandata[0]:
            for row in cleandata:
                raw_output.append({'package': row.split(' (')[0],
                                   'version': row.split(' (')[1].rstrip(')')})

        cleandata[0] = cleandata[0].lower()
        # otherwise normal table output
        else:
            # clear separator line
            for i, line in reversed(list(enumerate(cleandata))):
                if '---' in line:
                    cleandata.pop(i)

        if cleandata:
            raw_output = jc.parsers.universal.simple_table_parse(cleandata)
            cleandata[0] = cleandata[0].lower()

            if cleandata:
                raw_output = jc.parsers.universal.simple_table_parse(cleandata)

    if raw:
        return raw_output
@@ -42,7 +42,7 @@ import jc.utils


class info():
    version = '1.0'
    version = '1.1'
    description = 'pip show command parser'
    author = 'Kelly Brazil'
    author_email = 'kellyjonbrazil@gmail.com'

@@ -107,12 +107,11 @@ def parse(data, raw=False, quiet=False):
    raw_output = []
    package = {}

    linedata = data.splitlines()

    # Clear any blank lines
    cleandata = list(filter(None, linedata))
    cleandata = list(filter(None, data.splitlines()))

    if jc.utils.has_data(data):

    if cleandata:
        for row in cleandata:
            if row.startswith('---'):
                raw_output.append(package)
@@ -177,7 +177,7 @@ import jc.parsers.universal


class info():
    version = '1.1'
    version = '1.3'
    description = 'ps command parser'
    author = 'Kelly Brazil'
    author_email = 'kellyjonbrazil@gmail.com'

@@ -282,9 +282,12 @@ def parse(data, raw=False, quiet=False):
        jc.utils.compatibility(__name__, info.compatible)

    cleandata = data.splitlines()
    cleandata[0] = cleandata[0].lower()
    raw_output = []

    raw_output = jc.parsers.universal.simple_table_parse(cleandata)
    if jc.utils.has_data(data):

        cleandata[0] = cleandata[0].lower()
        raw_output = jc.parsers.universal.simple_table_parse(cleandata)

    if raw:
        return raw_output
@@ -84,7 +84,7 @@ import jc.parsers.universal


class info():
    version = '1.1'
    version = '1.4'
    description = 'route command parser'
    author = 'Kelly Brazil'
    author_email = 'kellyjonbrazil@gmail.com'

@@ -182,9 +182,18 @@ def parse(data, raw=False, quiet=False):
        jc.utils.compatibility(__name__, info.compatible)

    cleandata = data.splitlines()[1:]
    cleandata[0] = cleandata[0].lower()

    raw_output = jc.parsers.universal.simple_table_parse(cleandata)
    raw_output = []

    if jc.utils.has_data(data):

        # fixup header row for ipv6
        if ' Next Hop ' in cleandata[0]:
            cleandata[0] = cleandata[0].replace(' If', ' Iface')
            cleandata[0] = cleandata[0].replace(' Next Hop ', ' Next_Hop ').replace(' Flag ', ' Flags ').replace(' Met ', ' Metric ')

        cleandata[0] = cleandata[0].lower()
        raw_output = jc.parsers.universal.simple_table_parse(cleandata)

    if raw:
        return raw_output
@@ -84,7 +84,7 @@ import jc.utils


class info():
    version = '1.0'
    version = '1.1'
    description = '/etc/shadow file parser'
    author = 'Kelly Brazil'
    author_email = 'kellyjonbrazil@gmail.com'

@@ -153,12 +153,12 @@ def parse(data, raw=False, quiet=False):
        jc.utils.compatibility(__name__, info.compatible)

    raw_output = []
    cleandata = data.splitlines()

    # Clear any blank lines
    cleandata = list(filter(None, cleandata))
    cleandata = list(filter(None, data.splitlines()))

    if jc.utils.has_data(data):

    if cleandata:
        for entry in cleandata:
            if entry.startswith('#'):
                continue
@@ -251,7 +251,7 @@ import jc.utils


class info():
    version = '1.0'
    version = '1.2'
    description = 'ss command parser'
    author = 'Kelly Brazil'
    author_email = 'kellyjonbrazil@gmail.com'

@@ -308,17 +308,17 @@ def process(proc_data):
                except (ValueError):
                    entry[key] = None

        if 'local_port' in entry:
            if 'local_port' in entry:
                try:
                    entry['local_port_num'] = int(entry['local_port'])
                except (ValueError):
                    pass

        if 'peer_port' in entry:
            try:
                entry['peer_port_num'] = int(entry['peer_port'])
            except (ValueError):
                pass
            if 'peer_port' in entry:
                try:
                    entry['peer_port_num'] = int(entry['peer_port'])
                except (ValueError):
                    pass

    return proc_data
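The port handling in the ss process() hunk above can be sketched on its own. The idea: ss may report a named port (e.g. 'https') or a numeric one; numeric strings additionally get an integer `*_num` field, while named ports are silently left without one. The sample entry here is hypothetical:

```python
# Hypothetical parsed ss entry: one numeric port, one named port
entry = {'local_port': '22', 'peer_port': 'https'}

for field in ('local_port', 'peer_port'):
    if field in entry:
        try:
            # add an integer companion field only when the value is numeric
            entry[field + '_num'] = int(entry[field])
        except ValueError:
            pass
```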
@@ -342,12 +342,12 @@ def parse(data, raw=False, quiet=False):

    contains_colon = ['nl', 'p_raw', 'raw', 'udp', 'tcp', 'v_str', 'icmp6']
    raw_output = []
    cleandata = data.splitlines()

    # Clear any blank lines
    cleandata = list(filter(None, cleandata))
    cleandata = list(filter(None, data.splitlines()))

    if jc.utils.has_data(data):

    if cleandata:
        header_text = cleandata[0].lower()
        header_text = header_text.replace('netidstate', 'netid state')
        header_text = header_text.replace('local address:port', 'local_address local_port')
@@ -105,7 +105,7 @@ import jc.utils


class info():
    version = '1.4'
    version = '1.5'
    description = 'stat command parser'
    author = 'Kelly Brazil'
    author_email = 'kellyjonbrazil@gmail.com'

@@ -197,12 +197,11 @@ def parse(data, raw=False, quiet=False):
        jc.utils.compatibility(__name__, info.compatible)

    raw_output = []
    cleandata = data.splitlines()

    # Clear any blank lines
    cleandata = list(filter(None, cleandata))
    cleandata = list(filter(None, data.splitlines()))

    if cleandata:
    if jc.utils.has_data(data):

        # linux output
        if cleandata[0].startswith(' File: '):
jc/parsers/sysctl.py (new file, 154 lines)
@@ -0,0 +1,154 @@
"""jc - JSON CLI output utility sysctl -a Parser

Usage:

    specify --sysctl as the first argument if the piped input is coming from sysctl -a

    Note: since sysctl output is not easily parsable, only a very simple key/value object
    will be output. An attempt is made to convert obvious integers and floats. If no
    conversion is desired, use the -r (raw) option.

Compatibility:

    'linux', 'darwin', 'freebsd'

Examples:

    $ sysctl | jc --sysctl -p
    {
      "user.cs_path": "/usr/bin:/bin:/usr/sbin:/sbin",
      "user.bc_base_max": 99,
      "user.bc_dim_max": 2048,
      "user.bc_scale_max": 99,
      "user.bc_string_max": 1000,
      "user.coll_weights_max": 2,
      "user.expr_nest_max": 32
      ...
    }

    $ sysctl | jc --sysctl -p -r
    {
      "user.cs_path": "/usr/bin:/bin:/usr/sbin:/sbin",
      "user.bc_base_max": "99",
      "user.bc_dim_max": "2048",
      "user.bc_scale_max": "99",
      "user.bc_string_max": "1000",
      "user.coll_weights_max": "2",
      "user.expr_nest_max": "32",
      ...
    }
"""
import jc.utils


class info():
    version = '1.0'
    description = 'sysctl command parser'
    author = 'Kelly Brazil'
    author_email = 'kellyjonbrazil@gmail.com'
    # details = 'enter any other details here'

    # compatible options: linux, darwin, cygwin, win32, aix, freebsd
    compatible = ['linux', 'darwin', 'freebsd']
    magic_commands = ['sysctl']


__version__ = info.version


def process(proc_data):
    """
    Final processing to conform to the schema.

    Parameters:

        proc_data: (dictionary) raw structured data to process

    Returns:

        Dictionary. Structured data with the following schema:

        {
          "foo": string/integer/float,   # best guess based on value
          "bar": string/integer/float,
          "baz": string/integer/float
        }
    """
    for key in proc_data:
        try:
            proc_data[key] = int(proc_data[key])
        except (ValueError):
            try:
                proc_data[key] = float(proc_data[key])
            except (ValueError):
                pass
    return proc_data
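The "best guess" conversion in the sysctl process() above is a small, reusable pattern: try int first, fall back to float, and keep the string when neither parse succeeds. A standalone sketch:

```python
def best_guess(value):
    # try int first so '99' becomes 99, not 99.0
    try:
        return int(value)
    except ValueError:
        try:
            return float(value)
        except ValueError:
            # not numeric: keep the original string (e.g. a PATH-like value)
            return value
```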
def parse(data, raw=False, quiet=False):
    """
    Main text parsing function

    Parameters:

        data: (string) text data to parse
        raw: (boolean) output preprocessed JSON if True
        quiet: (boolean) suppress warning messages if True

    Returns:

        Dictionary. Raw or processed structured data.
    """
    if not quiet:
        jc.utils.compatibility(__name__, info.compatible)

    raw_output = {}

    if jc.utils.has_data(data):
        data = data.splitlines()

        # linux uses = and bsd uses :
        if ' = ' in data[0]:
            delim = ' = '
        else:
            delim = ': '

        for line in data:
            linedata = line.split(delim, maxsplit=1)

            # bsd adds values to newlines, which need to be fixed up with this try/except block
            try:
                key = linedata[0]
                value = linedata[1]

                # sysctl -a repeats some keys on linux. Append values from repeating keys
                # to the previous key value
                if key in raw_output:
                    existing_value = raw_output[key]
                    raw_output[key] = existing_value + '\n' + value
                    continue

                # fix for weird multiline output in bsd
                # if the key looks strange (has spaces or no dots) then it's probably a value field
                # on a separate line. in this case, just append it to the previous key in the dictionary.
                if '.' not in key or ' ' in key:
                    previous_key = [*raw_output.keys()][-1]
                    raw_output[previous_key] = raw_output[previous_key] + '\n' + line
                    continue

                # if the key looks normal then just add to the dictionary as normal
                else:
                    raw_output[key] = value
                    continue

            # if there is an IndexError exception, then there was no delimiter in the line.
            # In this case just append the data line as a value to the previous key.
            except IndexError:
                prior_key = [*raw_output.keys()][-1]
                raw_output[prior_key] = raw_output[prior_key] + '\n' + line
                continue

    if raw:
        return raw_output
    else:
        return process(raw_output)
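The trickiest part of the sysctl parse() above is value continuation: repeated keys append to the prior value, and lines with no delimiter at all attach to the most recent key. A standalone sketch of just that logic, on hypothetical input lines:

```python
# Hypothetical BSD-style sysctl lines; the third has no ' = ' delimiter
lines = [
    'kern.ostype = FreeBSD',
    'kern.banner = welcome',
    'second banner line',   # continuation of the previous value
]

raw_output = {}
for line in lines:
    linedata = line.split(' = ', maxsplit=1)
    try:
        key, value = linedata[0], linedata[1]
        if key in raw_output:
            # repeated key: append to the existing value
            raw_output[key] = raw_output[key] + '\n' + value
        else:
            raw_output[key] = value
    except IndexError:
        # no delimiter: attach the whole line to the most recent key
        prior_key = [*raw_output.keys()][-1]
        raw_output[prior_key] = raw_output[prior_key] + '\n' + line
```

This relies on dict insertion order (guaranteed in Python 3.7+) to find the most recent key.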
@@ -40,7 +40,7 @@ import jc.utils


class info():
    version = '1.1'
    version = '1.3'
    description = 'systemctl command parser'
    author = 'Kelly Brazil'
    author_email = 'kellyjonbrazil@gmail.com'

@@ -96,27 +96,30 @@ def parse(data, raw=False, quiet=False):
    if not quiet:
        jc.utils.compatibility(__name__, info.compatible)

    linedata = data.splitlines()
    # Clear any blank lines
    linedata = list(filter(None, linedata))
    # clean up non-ascii characters, if any
    cleandata = []
    for entry in linedata:
        cleandata.append(entry.encode('ascii', errors='ignore').decode())

    header_text = cleandata[0]
    header_list = header_text.lower().split()

    linedata = list(filter(None, data.splitlines()))
    raw_output = []

    for entry in cleandata[1:]:
        if 'LOAD = ' in entry:
            break
    if jc.utils.has_data(data):

    else:
        entry_list = entry.rstrip().split(maxsplit=4)
        output_line = dict(zip(header_list, entry_list))
        raw_output.append(output_line)
        # clean up non-ascii characters, if any
        cleandata = []
        for entry in linedata:
            cleandata.append(entry.encode('ascii', errors='ignore').decode())

        header_text = cleandata[0]
        header_list = header_text.lower().split()

        raw_output = []

        for entry in cleandata[1:]:
            if 'LOAD = ' in entry:
                break

            else:
                entry_list = entry.rstrip().split(maxsplit=4)
                output_line = dict(zip(header_list, entry_list))
                raw_output.append(output_line)

    if raw:
        return raw_output
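All four systemctl parsers in this diff share the same header/row pairing: the lowercased header line supplies the keys, and each data row is split with a maxsplit so the free-text final column stays in one field. A standalone sketch with a hypothetical header and row:

```python
# Hypothetical systemctl-style header and data row
header_list = 'unit load active sub description'.split()
entry = 'sshd.service loaded active running OpenSSH server daemon'

# maxsplit=4 keeps 'OpenSSH server daemon' together as the last field
entry_list = entry.split(maxsplit=4)
output_line = dict(zip(header_list, entry_list))
```

The list-sockets variant uses rsplit(maxsplit=2) instead, since there the multi-word column is the first one.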
@@ -59,7 +59,7 @@ import jc.utils


class info():
    version = '1.1'
    version = '1.3'
    description = 'systemctl list-jobs command parser'
    author = 'Kelly Brazil'
    author_email = 'kellyjonbrazil@gmail.com'

@@ -122,28 +122,32 @@ def parse(data, raw=False, quiet=False):
    if not quiet:
        jc.utils.compatibility(__name__, info.compatible)

    linedata = data.splitlines()
    # Clear any blank lines
    linedata = list(filter(None, linedata))
    # clean up non-ascii characters, if any
    cleandata = []
    for entry in linedata:
        cleandata.append(entry.encode('ascii', errors='ignore').decode())

    header_text = cleandata[0]
    header_text = header_text.lower()
    header_list = header_text.split()

    linedata = list(filter(None, data.splitlines()))
    raw_output = []

    for entry in cleandata[1:]:
        if 'No jobs running.' in entry or 'jobs listed.' in entry:
            break
    if jc.utils.has_data(data):

    else:
        entry_list = entry.split(maxsplit=4)
        output_line = dict(zip(header_list, entry_list))
        raw_output.append(output_line)
        cleandata = []

        # clean up non-ascii characters, if any
        for entry in linedata:
            cleandata.append(entry.encode('ascii', errors='ignore').decode())

        header_text = cleandata[0]
        header_text = header_text.lower()
        header_list = header_text.split()

        raw_output = []

        for entry in cleandata[1:]:
            if 'No jobs running.' in entry or 'jobs listed.' in entry:
                break

            else:
                entry_list = entry.split(maxsplit=4)
                output_line = dict(zip(header_list, entry_list))
                raw_output.append(output_line)

    if raw:
        return raw_output
@@ -34,7 +34,7 @@ import jc.utils


class info():
    version = '1.1'
    version = '1.3'
    description = 'systemctl list-sockets command parser'
    author = 'Kelly Brazil'
    author_email = 'kellyjonbrazil@gmail.com'

@@ -88,27 +88,30 @@ def parse(data, raw=False, quiet=False):
    if not quiet:
        jc.utils.compatibility(__name__, info.compatible)

    linedata = data.splitlines()
    # Clear any blank lines
    linedata = list(filter(None, linedata))
    # clean up non-ascii characters, if any
    cleandata = []
    for entry in linedata:
        cleandata.append(entry.encode('ascii', errors='ignore').decode())

    header_text = cleandata[0].lower()
    header_list = header_text.split()

    linedata = list(filter(None, data.splitlines()))
    raw_output = []

    for entry in cleandata[1:]:
        if 'sockets listed.' in entry:
            break
    if jc.utils.has_data(data):

    else:
        entry_list = entry.rsplit(maxsplit=2)
        output_line = dict(zip(header_list, entry_list))
        raw_output.append(output_line)
        cleandata = []
        # clean up non-ascii characters, if any
        for entry in linedata:
            cleandata.append(entry.encode('ascii', errors='ignore').decode())

        header_text = cleandata[0].lower()
        header_list = header_text.split()

        raw_output = []

        for entry in cleandata[1:]:
            if 'sockets listed.' in entry:
                break

            else:
                entry_list = entry.rsplit(maxsplit=2)
                output_line = dict(zip(header_list, entry_list))
                raw_output.append(output_line)

    if raw:
        return raw_output
@@ -31,7 +31,7 @@ import jc.utils


class info():
    version = '1.1'
    version = '1.3'
    description = 'systemctl list-unit-files command parser'
    author = 'Kelly Brazil'
    author_email = 'kellyjonbrazil@gmail.com'

@@ -84,28 +84,31 @@ def parse(data, raw=False, quiet=False):
    if not quiet:
        jc.utils.compatibility(__name__, info.compatible)

    linedata = data.splitlines()
    # Clear any blank lines
    linedata = list(filter(None, linedata))
    # clean up non-ascii characters, if any
    cleandata = []
    for entry in linedata:
        cleandata.append(entry.encode('ascii', errors='ignore').decode())

    header_text = cleandata[0]
    header_text = header_text.lower().replace('unit file', 'unit_file')
    header_list = header_text.split()

    linedata = list(filter(None, data.splitlines()))
    raw_output = []

    for entry in cleandata[1:]:
        if 'unit files listed.' in entry:
            break
    if jc.utils.has_data(data):

    else:
        entry_list = entry.split(maxsplit=4)
        output_line = dict(zip(header_list, entry_list))
        raw_output.append(output_line)
        cleandata = []
        # clean up non-ascii characters, if any
        for entry in linedata:
            cleandata.append(entry.encode('ascii', errors='ignore').decode())

        header_text = cleandata[0]
        header_text = header_text.lower().replace('unit file', 'unit_file')
        header_list = header_text.split()

        raw_output = []

        for entry in cleandata[1:]:
            if 'unit files listed.' in entry:
                break

            else:
                entry_list = entry.split(maxsplit=4)
                output_line = dict(zip(header_list, entry_list))
                raw_output.append(output_line)

    if raw:
        return raw_output
@@ -38,7 +38,7 @@ import jc.utils
|
||||
|
||||
|
||||
class info():
|
||||
version = '1.0'
|
||||
version = '1.1'
|
||||
description = 'timedatectl status command parser'
|
||||
author = 'Kelly Brazil'
|
||||
author_email = 'kellyjonbrazil@gmail.com'
|
||||
@@ -109,12 +109,14 @@ def parse(data, raw=False, quiet=False):
|
||||
|
||||
raw_output = {}
|
||||
|
||||
for line in filter(None, data.splitlines()):
|
||||
linedata = line.split(':', maxsplit=1)
|
||||
raw_output[linedata[0].strip().lower().replace(' ', '_')] = linedata[1].strip()
|
||||
if jc.utils.has_data(data):
|
||||
|
||||
if linedata[0].strip() == 'DST active':
|
||||
break
|
||||
for line in filter(None, data.splitlines()):
|
||||
linedata = line.split(':', maxsplit=1)
|
||||
raw_output[linedata[0].strip().lower().replace(' ', '_')] = linedata[1].strip()
|
||||
|
||||
if linedata[0].strip() == 'DST active':
|
||||
break
|
||||
|
||||
if raw:
|
||||
return raw_output
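Nearly every hunk in this comparison applies the same refactor: the body of parse() is wrapped in a `jc.utils.has_data(data)` guard so that empty or whitespace-only input returns an empty result instead of raising IndexError when indexing the first line. A minimal standalone sketch of the pattern — `has_data` here is my reimplementation on the assumption that it only tests for non-whitespace content, and `parse` is a toy header/row parser, not any actual jc parser:

```python
def has_data(data):
    """Return True if the string contains any non-whitespace characters."""
    return bool(data) and not data.isspace()


def parse(data):
    """Toy parser showing the guard: never index lines[0] on empty input."""
    raw_output = []

    if has_data(data):
        lines = list(filter(None, data.splitlines()))
        header = lines[0].lower().split()   # safe: has_data guarantees a line
        for entry in lines[1:]:
            raw_output.append(dict(zip(header, entry.split())))

    return raw_output
```

Without the guard, parse('') would raise IndexError at lines[0]; with it, the function simply returns an empty list.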
251  jc/parsers/tracepath.py  Normal file
@@ -0,0 +1,251 @@
"""jc - JSON CLI output utility tracepath Parser

Usage:

    specify --tracepath as the first argument if the piped input is coming from tracepath

Compatibility:

    'linux'

Examples:

    $ tracepath6 3ffe:2400:0:109::2 | jc --tracepath -p
    {
      "pmtu": 1480,
      "forward_hops": 2,
      "return_hops": 2,
      "hops": [
        {
          "ttl": 1,
          "guess": true,
          "host": "[LOCALHOST]",
          "reply_ms": null,
          "pmtu": 1500,
          "asymmetric_difference": null,
          "reached": false
        },
        {
          "ttl": 1,
          "guess": false,
          "host": "dust.inr.ac.ru",
          "reply_ms": 0.411,
          "pmtu": null,
          "asymmetric_difference": null,
          "reached": false
        },
        {
          "ttl": 2,
          "guess": false,
          "host": "dust.inr.ac.ru",
          "reply_ms": 0.39,
          "pmtu": 1480,
          "asymmetric_difference": 1,
          "reached": false
        },
        {
          "ttl": 2,
          "guess": false,
          "host": "3ffe:2400:0:109::2",
          "reply_ms": 463.514,
          "pmtu": null,
          "asymmetric_difference": null,
          "reached": true
        }
      ]
    }

    $ tracepath6 3ffe:2400:0:109::2 | jc --tracepath -p -r
    {
      "pmtu": "1480",
      "forward_hops": "2",
      "return_hops": "2",
      "hops": [
        {
          "ttl": "1",
          "guess": true,
          "host": "[LOCALHOST]",
          "reply_ms": null,
          "pmtu": "1500",
          "asymmetric_difference": null,
          "reached": false
        },
        {
          "ttl": "1",
          "guess": false,
          "host": "dust.inr.ac.ru",
          "reply_ms": "0.411",
          "pmtu": null,
          "asymmetric_difference": null,
          "reached": false
        },
        {
          "ttl": "2",
          "guess": false,
          "host": "dust.inr.ac.ru",
          "reply_ms": "0.390",
          "pmtu": "1480",
          "asymmetric_difference": "1",
          "reached": false
        },
        {
          "ttl": "2",
          "guess": false,
          "host": "3ffe:2400:0:109::2",
          "reply_ms": "463.514",
          "pmtu": null,
          "asymmetric_difference": null,
          "reached": true
        }
      ]
    }
"""
import re
import jc.utils


class info():
    version = '1.0'
    description = 'tracepath command parser'
    author = 'Kelly Brazil'
    author_email = 'kellyjonbrazil@gmail.com'

    # compatible options: linux, darwin, cygwin, win32, aix, freebsd
    compatible = ['linux']
    magic_commands = ['tracepath', 'tracepath6']


__version__ = info.version


def process(proc_data):
    """
    Final processing to conform to the schema.

    Parameters:

        proc_data:   (dictionary) raw structured data to process

    Returns:

        Dictionary. Structured data with the following schema:

        {
          "pmtu":                       integer,
          "forward_hops":               integer,
          "return_hops":                integer,
          "hops": [
            {
              "ttl":                    integer,
              "guess":                  boolean,
              "host":                   string,
              "reply_ms":               float,
              "pmtu":                   integer,
              "asymmetric_difference":  integer,
              "reached":                boolean
            }
          ]
        }
    """
    int_list = ['pmtu', 'forward_hops', 'return_hops', 'ttl', 'asymmetric_difference']
    float_list = ['reply_ms']

    # convert top-level keys directly; the original nested 'for item in int_list'
    # wrappers were redundant and the second one iterated int_list where
    # float_list was intended
    for key in proc_data:
        if key in int_list:
            try:
                proc_data[key] = int(proc_data[key])
            except (ValueError, TypeError):
                proc_data[key] = None

        if key in float_list:
            try:
                proc_data[key] = float(proc_data[key])
            except (ValueError, TypeError):
                proc_data[key] = None

    if 'hops' in proc_data:
        for entry in proc_data['hops']:
            for key in int_list:
                if key in entry:
                    try:
                        entry[key] = int(entry[key])
                    except (ValueError, TypeError):
                        entry[key] = None

            for key in float_list:
                if key in entry:
                    try:
                        entry[key] = float(entry[key])
                    except (ValueError, TypeError):
                        entry[key] = None

    return proc_data


def parse(data, raw=False, quiet=False):
    """
    Main text parsing function

    Parameters:

        data:        (string)  text data to parse
        raw:         (boolean) output preprocessed JSON if True
        quiet:       (boolean) suppress warning messages if True

    Returns:

        Dictionary. Raw or processed structured data.
    """
    if not quiet:
        jc.utils.compatibility(__name__, info.compatible)

    RE_TTL_HOST = re.compile(r'^\s?(?P<ttl>\d+)(?P<ttl_guess>\??):\s+(?P<host>(?:no reply|\S+))')  # groups: ttl, ttl_guess, host
    RE_PMTU = re.compile(r'\spmtu\s(?P<pmtu>[\d]+)')  # group: pmtu
    RE_REPLY_MS = re.compile(r'\s(?P<reply_ms>\d*\.\d*)ms')  # group: reply_ms
    RE_ASYMM = re.compile(r'\sasymm\s+(?P<asymm>[\d]+)')  # group: asymm
    RE_REACHED = re.compile(r'\sreached')
    RE_SUMMARY = re.compile(r'\s+Resume:\s+pmtu\s+(?P<pmtu>\d+)(?:\s+hops\s+(?P<hops>\d+))?(?:\s+back\s+(?P<back>\d+))?')  # groups: pmtu, hops, back

    raw_output = {}

    if jc.utils.has_data(data):
        hops = []

        for line in filter(None, data.splitlines()):
            # grab hop information
            ttl_host = re.search(RE_TTL_HOST, line)
            pmtu = re.search(RE_PMTU, line)
            reply_ms = re.search(RE_REPLY_MS, line)
            asymm = re.search(RE_ASYMM, line)
            reached = re.search(RE_REACHED, line)
            summary = re.search(RE_SUMMARY, line)

            if ttl_host:
                hop = {
                    'ttl': ttl_host.group('ttl'),
                    'guess': bool(ttl_host.group('ttl_guess')),
                    'host': ttl_host.group('host') if ttl_host.group('host') != 'no reply' else None,
                    'reply_ms': reply_ms.group('reply_ms') if reply_ms else None,
                    'pmtu': pmtu.group('pmtu') if pmtu else None,
                    'asymmetric_difference': asymm.group('asymm') if asymm else None,
                    'reached': bool(reached)
                }

                hops.append(hop)
                continue

            elif summary:
                raw_output = {
                    'pmtu': summary.group('pmtu') if summary.group('pmtu') else None,
                    'forward_hops': summary.group('hops') if summary.group('hops') else None,
                    'return_hops': summary.group('back') if summary.group('back') else None,
                    'hops': hops
                }

    if raw:
        return raw_output
    else:
        return process(raw_output)
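As a quick sanity check (mine, not part of the committed file), the hop regexes above can be exercised against lines shaped like the sample output in the docstring:

```python
import re

# same patterns as in the parser above
RE_TTL_HOST = re.compile(r'^\s?(?P<ttl>\d+)(?P<ttl_guess>\??):\s+(?P<host>(?:no reply|\S+))')
RE_PMTU = re.compile(r'\spmtu\s(?P<pmtu>[\d]+)')
RE_REPLY_MS = re.compile(r'\s(?P<reply_ms>\d*\.\d*)ms')

# a normal hop line and a first-hop "guess" line, as in the docstring example
hop_line = ' 1:  dust.inr.ac.ru                                         0.411ms'
guess_line = ' 1?: [LOCALHOST]                                          pmtu 1500'

ttl_host = RE_TTL_HOST.search(hop_line)
reply = RE_REPLY_MS.search(hop_line)
guess = RE_TTL_HOST.search(guess_line)
pmtu = RE_PMTU.search(guess_line)
```

The optional `ttl_guess` group matches the empty string on normal hop lines, which is why the parser can derive the `guess` boolean with a plain `bool()` on the group.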
422  jc/parsers/traceroute.py  Normal file
@@ -0,0 +1,422 @@
"""jc - JSON CLI output utility traceroute Parser

Usage:

    specify --traceroute as the first argument if the piped input is coming from traceroute

    Note: on OSX and FreeBSD be sure to redirect STDERR to STDOUT since the header line is sent to STDERR
          e.g. $ traceroute 8.8.8.8 2>&1 | jc --traceroute

Compatibility:

    'linux', 'darwin', 'freebsd'

Examples:

    $ traceroute google.com | jc --traceroute -p
    {
      "destination_ip": "216.58.194.46",
      "destination_name": "google.com",
      "hops": [
        {
          "hop": 1,
          "probes": [
            {
              "annotation": null,
              "asn": null,
              "ip": "216.230.231.141",
              "name": "216-230-231-141.static.houston.tx.oplink.net",
              "rtt": 198.574
            },
            {
              "annotation": null,
              "asn": null,
              "ip": "216.230.231.141",
              "name": "216-230-231-141.static.houston.tx.oplink.net",
              "rtt": null
            },
            {
              "annotation": null,
              "asn": null,
              "ip": "216.230.231.141",
              "name": "216-230-231-141.static.houston.tx.oplink.net",
              "rtt": 198.65
            }
          ]
        },
        ...
      ]
    }

    $ traceroute google.com | jc --traceroute -p -r
    {
      "destination_ip": "216.58.194.46",
      "destination_name": "google.com",
      "hops": [
        {
          "hop": "1",
          "probes": [
            {
              "annotation": null,
              "asn": null,
              "ip": "216.230.231.141",
              "name": "216-230-231-141.static.houston.tx.oplink.net",
              "rtt": "198.574"
            },
            {
              "annotation": null,
              "asn": null,
              "ip": "216.230.231.141",
              "name": "216-230-231-141.static.houston.tx.oplink.net",
              "rtt": null
            },
            {
              "annotation": null,
              "asn": null,
              "ip": "216.230.231.141",
              "name": "216-230-231-141.static.houston.tx.oplink.net",
              "rtt": "198.650"
            }
          ]
        },
        ...
      ]
    }
"""
import re
from decimal import Decimal
import jc.utils


class info():
    version = '1.0'
    description = 'traceroute command parser'
    author = 'Kelly Brazil'
    author_email = 'kellyjonbrazil@gmail.com'
    details = 'Using the trparse library by Luis Benitez at https://github.com/lbenitez000/trparse'

    # compatible options: linux, darwin, cygwin, win32, aix, freebsd
    compatible = ['linux', 'darwin', 'freebsd']
    magic_commands = ['traceroute', 'traceroute6']


__version__ = info.version


'''
Copyright (C) 2015 Luis Benitez

Parses the output of a traceroute execution into an AST (Abstract Syntax Tree).

The MIT License (MIT)

Copyright (c) 2014 Luis Benitez

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
'''

RE_HEADER = re.compile(r'(\S+)\s+\((\d+\.\d+\.\d+\.\d+|[0-9a-fA-F:]+)\)')
RE_PROBE_NAME_IP = re.compile(r'(\S+)\s+\((\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}|[0-9a-fA-F:]+)\)+')
RE_PROBE_BSD_IPV6 = re.compile(r'\b(?:[A-Fa-f0-9]{1,4}:){7}[A-Fa-f0-9]{1,4}\b')
RE_HOP = re.compile(r'^\s*(\d+)?\s+(.+)$')
RE_PROBE_ASN = re.compile(r'\[AS(\d+)\]')
RE_PROBE_RTT_ANNOTATION = re.compile(r'(\d+\.?\d+)?\s+ms|(\s+\*\s+)\s*(!\S*)?')


class Traceroute(object):
    def __init__(self, dest_name, dest_ip):
        self.dest_name = dest_name
        self.dest_ip = dest_ip
        self.hops = []

    def add_hop(self, hop):
        self.hops.append(hop)

    def __str__(self):
        text = "Traceroute for %s (%s)\n\n" % (self.dest_name, self.dest_ip)
        for hop in self.hops:
            text += str(hop)
        return text


class Hop(object):
    def __init__(self, idx):
        self.idx = idx          # Hop count, starting at 1 (usually)
        self.probes = []        # Series of Probe instances

    def add_probe(self, probe):
        """Adds a Probe instance to this hop's results."""
        if self.probes:
            probe_last = self.probes[-1]
            if not probe.ip:
                probe.ip = probe_last.ip
                probe.name = probe_last.name
        self.probes.append(probe)

    def __str__(self):
        text = "{:>3d} ".format(self.idx)
        text_len = len(text)
        for n, probe in enumerate(self.probes):
            text_probe = str(probe)
            if n:
                text += (text_len * " ") + text_probe
            else:
                text += text_probe
        text += "\n"
        return text


class Probe(object):
    def __init__(self, name=None, ip=None, asn=None, rtt=None, annotation=None):
        self.name = name
        self.ip = ip
        self.asn = asn                  # Autonomous System number
        self.rtt = rtt                  # RTT in ms
        self.annotation = annotation    # Annotation, such as !H, !N, !X, etc

    def __str__(self):
        text = ""
        if self.asn is not None:
            text += "[AS{:d}] ".format(self.asn)
        if self.rtt:
            text += "{:s} ({:s}) {:1.3f} ms".format(self.name, self.ip, self.rtt)
        else:
            text = "*"
        if self.annotation:
            text += " {:s}".format(self.annotation)
        text += "\n"
        return text


def loads(data):
    lines = data.splitlines()

    # Get headers
    match_dest = RE_HEADER.search(lines[0])
    dest_name = match_dest.group(1)
    dest_ip = match_dest.group(2)

    # The Traceroute node is the root of the tree
    traceroute = Traceroute(dest_name, dest_ip)

    # Parse the remaining lines, they should be only hops/probes
    for line in lines[1:]:
        # Skip empty lines
        if not line:
            continue

        hop_match = RE_HOP.match(line)

        if hop_match.group(1):
            hop_index = int(hop_match.group(1))
        else:
            hop_index = None

        if hop_index is not None:
            hop = Hop(hop_index)
            traceroute.add_hop(hop)

        hop_string = hop_match.group(2)

        probe_asn_match = RE_PROBE_ASN.search(hop_string)
        if probe_asn_match:
            probe_asn = int(probe_asn_match.group(1))
        else:
            probe_asn = None

        probe_name_ip_match = RE_PROBE_NAME_IP.search(hop_string)
        probe_bsd_ipv6_match = RE_PROBE_BSD_IPV6.search(hop_string)
        if probe_name_ip_match:
            probe_name = probe_name_ip_match.group(1)
            probe_ip = probe_name_ip_match.group(2)
        elif probe_bsd_ipv6_match:
            probe_name = None
            probe_ip = probe_bsd_ipv6_match.group(0)
        else:
            probe_name = None
            probe_ip = None

        probe_rtt_annotations = RE_PROBE_RTT_ANNOTATION.findall(hop_string)

        for probe_rtt_annotation in probe_rtt_annotations:
            if probe_rtt_annotation[0]:
                probe_rtt = Decimal(probe_rtt_annotation[0])
            elif probe_rtt_annotation[1]:
                probe_rtt = None
            else:
                message = f"Expected probe RTT or *. Got: '{probe_rtt_annotation[0]}'"
                raise ParseError(message)

            probe_annotation = probe_rtt_annotation[2] or None

            probe = Probe(
                name=probe_name,
                ip=probe_ip,
                asn=probe_asn,
                rtt=probe_rtt,
                annotation=probe_annotation
            )
            hop.add_probe(probe)

    return traceroute


class ParseError(Exception):
    pass


########################################################################################

def process(proc_data):
    """
    Final processing to conform to the schema.

    Parameters:

        proc_data:   (dictionary) raw structured data to process

    Returns:

        Dictionary. Structured data with the following schema:

        {
          "destination_ip":     string,
          "destination_name":   string,
          "hops": [
            {
              "hop":            integer,
              "probes": [
                {
                  "annotation": string,
                  "asn":        integer,
                  "ip":         string,
                  "name":       string,
                  "rtt":        float
                }
              ]
            }
          ]
        }
    """
    int_list = ['hop', 'asn']
    float_list = ['rtt']

    if 'hops' in proc_data:
        for entry in proc_data['hops']:
            for key in int_list:
                if key in entry:
                    try:
                        entry[key] = int(entry[key])
                    except (ValueError, TypeError):
                        entry[key] = None

            for key in float_list:
                if key in entry:
                    try:
                        entry[key] = float(entry[key])
                    except (ValueError, TypeError):
                        entry[key] = None

            if 'probes' in entry:
                for item in entry['probes']:
                    for key in int_list:
                        if key in item:
                            try:
                                item[key] = int(item[key])
                            except (ValueError, TypeError):
                                item[key] = None

                    for key in float_list:
                        if key in item:
                            try:
                                item[key] = float(item[key])
                            except (ValueError, TypeError):
                                item[key] = None

    return proc_data


def parse(data, raw=False, quiet=False):
    """
    Main text parsing function

    Parameters:

        data:        (string)  text data to parse
        raw:         (boolean) output preprocessed JSON if True
        quiet:       (boolean) suppress warning messages if True

    Returns:

        Dictionary. Raw or processed structured data.
    """
    if not quiet:
        jc.utils.compatibility(__name__, info.compatible)

    raw_output = {}

    if jc.utils.has_data(data):

        # remove any warning lines
        new_data = []
        for data_line in data.splitlines():
            if 'traceroute: Warning: ' not in data_line and 'traceroute6: Warning: ' not in data_line:
                new_data.append(data_line)
            else:
                continue
        data = '\n'.join(new_data)

        # check if header row exists, otherwise raise exception
        if not data.splitlines()[0].startswith('traceroute to ') and not data.splitlines()[0].startswith('traceroute6 to '):
            raise ParseError('Traceroute header line not found. Be sure to redirect STDERR to STDOUT on some operating systems.')

        tr = loads(data)
        hops = tr.hops
        hops_list = []

        if hops:
            for hop in hops:
                hop_obj = {}
                hop_obj['hop'] = str(hop.idx)
                probe_list = []

                if hop.probes:
                    for probe in hop.probes:
                        probe_obj = {
                            'annotation': probe.annotation,
                            'asn': None if probe.asn is None else str(probe.asn),
                            'ip': probe.ip,
                            'name': probe.name,
                            'rtt': None if probe.rtt is None else str(probe.rtt)
                        }
                        probe_list.append(probe_obj)

                hop_obj['probes'] = probe_list
                hops_list.append(hop_obj)

        raw_output = {
            'destination_ip': tr.dest_ip,
            'destination_name': tr.dest_name,
            'hops': hops_list
        }

    if raw:
        return raw_output
    else:
        return process(raw_output)
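RE_PROBE_RTT_ANNOTATION above is the workhorse of loads(): findall() yields one 3-tuple per probe column, with group 0 carrying the RTT string and an empty group 0 for a `*` timeout. A small demonstration on a synthetic hop string (the host name and IP are invented):

```python
import re

RE_PROBE_RTT_ANNOTATION = re.compile(r'(\d+\.?\d+)?\s+ms|(\s+\*\s+)\s*(!\S*)?')

hop_string = 'host.example (10.0.0.1)  198.574 ms  *  198.650 ms'
matches = RE_PROBE_RTT_ANNOTATION.findall(hop_string)

# mirror the branch in loads(): keep the RTT string when group 0 matched,
# and record None for a '*' timeout column
rtts = [m[0] if m[0] else None for m in matches]
```

This is why loads() can treat each tuple uniformly: group 1 (the `*` alternative) only participates when group 0 did not.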
@@ -30,7 +30,7 @@ import jc.utils


 class info():
-    version = '1.2'
+    version = '1.4'
     description = 'uname -a command parser'
     author = 'Kelly Brazil'
     author_email = 'kellyjonbrazil@gmail.com'
@@ -43,6 +43,10 @@ class info():
 __version__ = info.version


+class ParseError(Exception):
+    pass
+
+
 def process(proc_data):
     """
     Final processing to conform to the schema.
@@ -88,12 +92,16 @@ def parse(data, raw=False, quiet=False):
         jc.utils.compatibility(__name__, info.compatible)

     raw_output = {}
-    split_line = data.split()
-
-    if len(split_line) > 1:
+
+    if jc.utils.has_data(data):

         # check for OSX output
         if data.startswith('Darwin'):
             parsed_line = data.split()

+            if len(parsed_line) < 5:
+                raise ParseError('Could not parse uname output. Make sure to use "uname -a".')
+
             raw_output['machine'] = parsed_line.pop(-1)
             raw_output['kernel_name'] = parsed_line.pop(0)
             raw_output['node_name'] = parsed_line.pop(0)
@@ -103,6 +111,10 @@ def parse(data, raw=False, quiet=False):
         # otherwise use linux parser
         else:
             parsed_line = data.split(maxsplit=3)

+            if len(parsed_line) < 3:
+                raise ParseError('Could not parse uname output. Make sure to use "uname -a".')
+
             raw_output['kernel_name'] = parsed_line.pop(0)
             raw_output['node_name'] = parsed_line.pop(0)
             raw_output['kernel_release'] = parsed_line.pop(0)
@@ -34,7 +34,7 @@ import jc.utils


 class info():
-    version = '1.0'
+    version = '1.2'
     description = 'uptime command parser'
     author = 'Kelly Brazil'
     author_email = 'kellyjonbrazil@gmail.com'
@@ -107,10 +107,10 @@ def parse(data, raw=False, quiet=False):
     jc.utils.compatibility(__name__, info.compatible)

     raw_output = {}

     cleandata = data.splitlines()

-    if cleandata:
+    if jc.utils.has_data(data):

         parsed_line = cleandata[0].split()

         # allow space for odd times
@@ -83,7 +83,7 @@ import jc.utils


 class info():
-    version = '1.1'
+    version = '1.3'
     description = 'w command parser'
     author = 'Kelly Brazil'
     author_email = 'kellyjonbrazil@gmail.com'
@@ -149,36 +149,40 @@ def parse(data, raw=False, quiet=False):
     jc.utils.compatibility(__name__, info.compatible)

     cleandata = data.splitlines()[1:]
-    header_text = cleandata[0].lower()
-    # fixup for 'from' column that can be blank
-    from_col = header_text.find('from')
-    # clean up 'login@' header
-    # even though @ in a key is valid json, it can make things difficult
-    header_text = header_text.replace('login@', 'login_at')
-    headers = [h for h in ' '.join(header_text.strip().split()).split() if h]
-
-    # parse lines
-    raw_output = []
-    if cleandata:
-        for entry in cleandata[1:]:
-            output_line = {}
-
-            # normalize data by inserting Null for missing data
-            temp_line = entry.split(maxsplit=len(headers) - 1)
-
-            # fix from column, always at column 2
-            if 'from' in headers:
-                if entry[from_col] in string.whitespace:
-                    temp_line.insert(2, '-')
-
-            output_line = dict(zip(headers, temp_line))
-            raw_output.append(output_line)
-
-    # strip whitespace from beginning and end of all string values
-    for row in raw_output:
-        for item in row:
-            if isinstance(row[item], str):
-                row[item] = row[item].strip()
+
+    if jc.utils.has_data(data):
+
+        header_text = cleandata[0].lower()
+        # fixup for 'from' column that can be blank
+        from_col = header_text.find('from')
+        # clean up 'login@' header
+        # even though @ in a key is valid json, it can make things difficult
+        header_text = header_text.replace('login@', 'login_at')
+        headers = [h for h in ' '.join(header_text.strip().split()).split() if h]
+
+        # parse lines
+        raw_output = []
+        if cleandata:
+            for entry in cleandata[1:]:
+                output_line = {}
+
+                # normalize data by inserting Null for missing data
+                temp_line = entry.split(maxsplit=len(headers) - 1)
+
+                # fix from column, always at column 2
+                if 'from' in headers:
+                    if entry[from_col] in string.whitespace:
+                        temp_line.insert(2, '-')
+
+                output_line = dict(zip(headers, temp_line))
+                raw_output.append(output_line)

+        # strip whitespace from beginning and end of all string values
+        for row in raw_output:
+            for item in row:
+                if isinstance(row[item], str):
+                    row[item] = row[item].strip()

     if raw:
         return raw_output
@@ -103,7 +103,7 @@ import jc.utils


 class info():
-    version = '1.0'
+    version = '1.1'
     description = 'who command parser'
     author = 'Kelly Brazil'
     author_email = 'kellyjonbrazil@gmail.com'
@@ -174,12 +174,12 @@ def parse(data, raw=False, quiet=False):
     jc.utils.compatibility(__name__, info.compatible)

     raw_output = []
-    cleandata = data.splitlines()
-
-    # Clear any blank lines
-    cleandata = list(filter(None, cleandata))
+    cleandata = list(filter(None, data.splitlines()))

-    if cleandata:
+    if jc.utils.has_data(data):

         for line in cleandata:
             output_line = {}
             linedata = line.split()
@@ -59,7 +59,7 @@ import xmltodict


 class info():
-    version = '1.0'
+    version = '1.2'
     description = 'XML file parser'
     author = 'Kelly Brazil'
     author_email = 'kellyjonbrazil@gmail.com'
@@ -111,7 +111,10 @@ def parse(data, raw=False, quiet=False):
     if not quiet:
         jc.utils.compatibility(__name__, info.compatible)

-    if data:
+    raw_output = []
+
+    if jc.utils.has_data(data):
+
         raw_output = xmltodict.parse(data)

     if raw:
@@ -71,7 +71,7 @@ from ruamel.yaml import YAML


 class info():
-    version = '1.0'
+    version = '1.1'
     description = 'YAML file parser'
     author = 'Kelly Brazil'
     author_email = 'kellyjonbrazil@gmail.com'
@@ -126,10 +126,13 @@ def parse(data, raw=False, quiet=False):
     jc.utils.compatibility(__name__, info.compatible)

     raw_output = []

-    yaml = YAML(typ='safe')
-
-    for document in yaml.load_all(data):
-        raw_output.append(document)
+    if jc.utils.has_data(data):
+
+        yaml = YAML(typ='safe')
+
+        for document in yaml.load_all(data):
+            raw_output.append(document)

     if raw:
         return raw_output
247  jc/tracebackplus.py  Normal file
@@ -0,0 +1,247 @@
"""More comprehensive traceback formatting for Python scripts.
To enable this module, do:
    import tracebackplus; tracebackplus.enable()
at the top of your script. The optional arguments to enable() are:
    logdir      - if set, tracebacks are written to files in this directory
    context     - number of lines of source code to show for each stack frame
By default, tracebacks are displayed but not saved and the context is 5 lines.
Alternatively, if you have caught an exception and want tracebackplus to display it
for you, call tracebackplus.handler(). The optional argument to handler() is a
3-item tuple (etype, evalue, etb) just like the value of sys.exc_info().
"""

'''
tracebackplus was derived from the cgitb standard library module. As cgitb is being
deprecated, this simplified version of cgitb was created.

https://github.com/python/cpython/blob/3.8/Lib/cgitb.py

"Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010,
2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020 Python Software Foundation;
All Rights Reserved"

PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2
--------------------------------------------

1. This LICENSE AGREEMENT is between the Python Software Foundation
("PSF"), and the Individual or Organization ("Licensee") accessing and
otherwise using this software ("Python") in source or binary form and
its associated documentation.

2. Subject to the terms and conditions of this License Agreement, PSF hereby
grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce,
analyze, test, perform and/or display publicly, prepare derivative works,
distribute, and otherwise use Python alone or in any derivative version,
provided, however, that PSF's License Agreement and PSF's notice of copyright,
i.e., "Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010,
2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020 Python Software Foundation;
All Rights Reserved" are retained in Python alone or in any derivative version
prepared by Licensee.

3. In the event Licensee prepares a derivative work that is based on
or incorporates Python or any part thereof, and wants to make
the derivative work available to others as provided herein, then
Licensee hereby agrees to include in any such work a brief summary of
the changes made to Python.

4. PSF is making Python available to Licensee on an "AS IS"
basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND
DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT
INFRINGE ANY THIRD PARTY RIGHTS.

5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON
FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS
A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON,
OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.

6. This License Agreement will automatically terminate upon a material
breach of its terms and conditions.

7. Nothing in this License Agreement shall be deemed to create any
relationship of agency, partnership, or joint venture between PSF and
Licensee. This License Agreement does not grant permission to use PSF
trademarks or trade name in a trademark sense to endorse or promote
products or services of Licensee, or any third party.

8. By copying, installing or otherwise using Python, Licensee
agrees to be bound by the terms and conditions of this License
Agreement.
'''

import inspect
import keyword
import linecache
import os
import pydoc
import sys
import tempfile
import time
import tokenize
import traceback


__UNDEF__ = []          # a special sentinel object


def lookup(name, frame, locals):
    """Find the value for a given name in the given environment."""
    if name in locals:
        return 'local', locals[name]
    if name in frame.f_globals:
        return 'global', frame.f_globals[name]
    if '__builtins__' in frame.f_globals:
        builtins = frame.f_globals['__builtins__']
        if isinstance(builtins, dict):
            if name in builtins:
                return 'builtin', builtins[name]
        else:
            if hasattr(builtins, name):
                return 'builtin', getattr(builtins, name)
    return None, __UNDEF__
|
||||
|
||||
|
||||
def scanvars(reader, frame, locals):
    """Scan one logical line of Python and look up values of variables used."""
    vars, lasttoken, parent, prefix, value = [], None, None, '', __UNDEF__
    for ttype, token, start, end, line in tokenize.generate_tokens(reader):
        if ttype == tokenize.NEWLINE:
            break
        if ttype == tokenize.NAME and token not in keyword.kwlist:
            if lasttoken == '.':
                if parent is not __UNDEF__:
                    value = getattr(parent, token, __UNDEF__)
                    vars.append((prefix + token, prefix, value))
            else:
                where, value = lookup(token, frame, locals)
                vars.append((token, where, value))
        elif token == '.':
            prefix += lasttoken + '.'
            parent = value
        else:
            parent, prefix = None, ''
        lasttoken = token
    return vars

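scanvars() drives tokenize.generate_tokens() with a line-at-a-time reader callable and keeps only identifier tokens that are not keywords. The core technique can be shown in isolation; this is a standalone sketch and `names_in_line` is a hypothetical helper, not jc code:

```python
import io
import keyword
import tokenize


def names_in_line(source_line):
    """Return the identifier tokens in one logical line of Python,
    skipping keywords -- the same filtering scanvars applies."""
    reader = io.StringIO(source_line).readline
    names = []
    for ttype, token, start, end, line in tokenize.generate_tokens(reader):
        if ttype == tokenize.NEWLINE:
            break  # one logical line is enough, as in scanvars
        if ttype == tokenize.NAME and token not in keyword.kwlist:
            names.append(token)
    return names


print(names_in_line('total = price * qty\n'))  # ['total', 'price', 'qty']
```

The NEWLINE check stops the scan at the end of the logical line, so multi-line statements continued with backslashes or brackets are consumed as a unit.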
def text(einfo, context=5):
    """Return a plain text document describing a given traceback."""
    etype, evalue, etb = einfo
    if isinstance(etype, type):
        etype = etype.__name__
    pyver = 'Python ' + sys.version.split()[0] + ': ' + sys.executable
    date = time.ctime(time.time())
    head = '%s\n%s\n%s\n' % (str(etype), pyver, date) + '''
A problem occurred in a Python script.  Here is the sequence of
function calls leading up to the error, in the order they occurred.
'''

    frames = []
    records = inspect.getinnerframes(etb, context)
    for frame, file, lnum, func, lines, index in records:
        file = file and os.path.abspath(file) or '?'
        args, varargs, varkw, locals = inspect.getargvalues(frame)
        call = ''
        if func != '?':
            call = 'in ' + func + \
                inspect.formatargvalues(args, varargs, varkw, locals,
                    formatvalue=lambda value: '=' + pydoc.text.repr(value))

        highlight = {}

        def reader(lnum=[lnum]):
            highlight[lnum[0]] = 1
            try:
                return linecache.getline(file, lnum[0])
            finally:
                lnum[0] += 1
        vars = scanvars(reader, frame, locals)

        rows = [' %s %s' % (file, call)]
        if index is not None:
            i = lnum - index
            for line in lines:
                num = '%5d ' % i
                rows.append(num + line.rstrip())
                i += 1

        done, dump = {}, []
        for name, where, value in vars:
            if name in done:
                continue
            done[name] = 1
            if value is not __UNDEF__:
                if where == 'global':
                    name = 'global ' + name
                elif where != 'local':
                    name = where + name.split('.')[-1]
                dump.append('%s = %s' % (name, pydoc.text.repr(value)))
            else:
                dump.append(name + ' undefined')

        rows.append('\n'.join(dump))
        frames.append('\n%s\n' % '\n'.join(rows))

    exception = ['%s: %s' % (str(etype), str(evalue))]
    for name in dir(evalue):
        value = pydoc.text.repr(getattr(evalue, name))
        exception.append('\n%s%s = %s' % (' ' * 4, name, value))

    return head + ''.join(frames) + ''.join(exception) + '''

The above is a description of an error in a Python program.  Here is
the original traceback:

%s
''' % ''.join(traceback.format_exception(etype, evalue, etb))

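text() walks the traceback with inspect.getinnerframes(etb, context), which yields one record per frame, innermost last. A minimal standalone illustration of that call, separate from jc's formatter (`frame_summary` and `boom` are illustrative names):

```python
import inspect
import sys


def frame_summary(etb, context=3):
    """List (filename, lineno, function) for each frame in a traceback,
    innermost last -- the same record layout text() iterates over."""
    records = inspect.getinnerframes(etb, context)
    return [(rec.filename, rec.lineno, rec.function) for rec in records]


def boom():
    return 1 / 0


try:
    boom()
except ZeroDivisionError:
    _, _, tb = sys.exc_info()
    for filename, lineno, function in frame_summary(tb):
        print(function)   # prints the caller's frame, then 'boom'
```

Each record also carries the `context` surrounding source lines and an index of the failing line, which is what lets text() print numbered excerpts per frame.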
class Hook:
    """A hook to replace sys.excepthook"""

    def __init__(self, logdir=None, context=5, file=None):
        self.logdir = logdir            # log tracebacks to files if not None
        self.context = context          # number of source code lines per frame
        self.file = file or sys.stdout  # place to send the output

    def __call__(self, etype, evalue, etb):
        self.handle((etype, evalue, etb))

    def handle(self, info=None):
        info = info or sys.exc_info()

        formatter = text

        try:
            doc = formatter(info, self.context)
        except:   # just in case something goes wrong
            doc = ''.join(traceback.format_exception(*info))

        self.file.write(doc + '\n')

        if self.logdir is not None:
            suffix = '.txt'
            (fd, path) = tempfile.mkstemp(suffix=suffix, dir=self.logdir)

            try:
                with os.fdopen(fd, 'w') as file:
                    file.write(doc)
                msg = '%s contains the description of this error.' % path
            except:
                msg = 'Tried to save traceback to %s, but failed.' % path

            self.file.write(msg + '\n')

        try:
            self.file.flush()
        except:
            pass


handler = Hook().handle

def enable(logdir=None, context=5):
    """Install an exception handler that sends verbose tracebacks to STDOUT."""
    sys.excepthook = Hook(logdir=logdir, context=context)
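enable() works by swapping the hook object in for sys.excepthook, so every unhandled exception is routed through the formatter. A reduced sketch of the same pattern using only the stdlib traceback module (`PlainHook` is a hypothetical stand-in, not jc's Hook):

```python
import sys
import traceback


class PlainHook:
    """A minimal stand-in for the Hook class above: formats the exception
    with the stdlib traceback module and writes it to a chosen stream."""

    def __init__(self, file=None):
        self.file = file or sys.stdout

    def __call__(self, etype, evalue, etb):
        doc = ''.join(traceback.format_exception(etype, evalue, etb))
        self.file.write(doc + '\n')


# installing it mirrors enable():
#     sys.excepthook = PlainHook()
# here it is exercised directly instead of raising an unhandled exception:
try:
    {}['missing']
except KeyError:
    PlainHook()(*sys.exc_info())
```

Because sys.excepthook receives exactly the `(etype, evalue, etb)` triple, any callable with that signature can be installed; jc's Hook additionally accepts a `logdir` so the rendered document can be written to a temp file.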
jc/utils.py (18 changed lines)
@@ -66,4 +66,20 @@ def compatibility(mod_name, compatible):
     if not platform_found:
         mod = mod_name.split('.')[-1]
         compat_list = ', '.join(compatible)
-        warning_message(f'{mod} parser not compatible with your OS ({sys.platform}).\n          Compatible platforms: {compat_list}')
+        warning_message(f'{mod} parser not compatible with your OS ({sys.platform}).\n'
+                        f'          Compatible platforms: {compat_list}')
+
+
+def has_data(data):
+    """
+    Checks if the input contains data. If there are any non-whitespace characters then return True, else return False
+
+    Parameters:
+
+        data:        (string)  input to check whether it contains data
+
+    Returns:
+
+        Boolean      True if input string (data) contains non-whitespace characters, otherwise False
+    """
+    return True if data and not data.isspace() else False
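The has_data() helper added in this hunk can be exercised on its own; it is reproduced here verbatim as a standalone sketch:

```python
def has_data(data):
    """Return True if the input string contains any non-whitespace characters."""
    return True if data and not data.isspace() else False


print(has_data(''))        # False
print(has_data('  \n\t'))  # False (whitespace only)
print(has_data(' jc '))    # True
```

The `data and` guard matters: `''.isspace()` is False, so without it an empty string would wrongly report as containing data.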
man/jc.1 (new file, 273 lines)
@@ -0,0 +1,273 @@
.TH jc 1 2020-07-12 1.13.0 "JSON CLI output utility"
.SH NAME
jc \- JSONifies the output of many CLI tools and file-types
.SH SYNOPSIS
COMMAND | jc PARSER [OPTIONS]

or magic syntax:

jc [OPTIONS] COMMAND
.SH DESCRIPTION
jc JSONifies the output of many CLI tools and file-types for easier parsing in scripts. jc accepts piped input from STDIN and outputs a JSON representation of the previous command's output to STDOUT. Alternatively, the "magic" syntax can be used by prepending jc to the command to be converted. Options can be passed to jc immediately before the command is given. (Note: command aliases are not supported)

.SH OPTIONS
.B
Parsers:
.RS
.TP
.B
\fB--airport\fP
airport \fB-I\fP command parser
.TP
.B
\fB--airport-s\fP
airport \fB-s\fP command parser
.TP
.B
\fB--arp\fP
arp command parser
.TP
.B
\fB--blkid\fP
blkid command parser
.TP
.B
\fB--crontab\fP
crontab command and file parser
.TP
.B
\fB--crontab-u\fP
crontab file parser with user support
.TP
.B
\fB--csv\fP
CSV file parser
.TP
.B
\fB--df\fP
df command parser
.TP
.B
\fB--dig\fP
dig command parser
.TP
.B
\fB--dmidecode\fP
dmidecode command parser
.TP
.B
\fB--du\fP
du command parser
.TP
.B
\fB--env\fP
env command parser
.TP
.B
\fB--file\fP
file command parser
.TP
.B
\fB--free\fP
free command parser
.TP
.B
\fB--fstab\fP
fstab file parser
.TP
.B
\fB--group\fP
/etc/group file parser
.TP
.B
\fB--gshadow\fP
/etc/gshadow file parser
.TP
.B
\fB--history\fP
history command parser
.TP
.B
\fB--hosts\fP
/etc/hosts file parser
.TP
.B
\fB--id\fP
id command parser
.TP
.B
\fB--ifconfig\fP
ifconfig command parser
.TP
.B
\fB--ini\fP
INI file parser. Also parses files/output containing simple key/value pairs
.TP
.B
\fB--iptables\fP
iptables command parser
.TP
.B
\fB--jobs\fP
jobs command parser
.TP
.B
\fB--last\fP
last and lastb command parser
.TP
.B
\fB--ls\fP
ls command parser
.TP
.B
\fB--lsblk\fP
lsblk command parser
.TP
.B
\fB--lsmod\fP
lsmod command parser
.TP
.B
\fB--lsof\fP
lsof command parser
.TP
.B
\fB--mount\fP
mount command parser
.TP
.B
\fB--netstat\fP
netstat command parser
.TP
.B
\fB--ntpq\fP
ntpq \fB-p\fP command parser
.TP
.B
\fB--passwd\fP
/etc/passwd file parser
.TP
.B
\fB--ping\fP
ping command parser
.TP
.B
\fB--pip-list\fP
pip list command parser
.TP
.B
\fB--pip-show\fP
pip show command parser
.TP
.B
\fB--ps\fP
ps command parser
.TP
.B
\fB--route\fP
route command parser
.TP
.B
\fB--shadow\fP
/etc/shadow file parser
.TP
.B
\fB--ss\fP
ss command parser
.TP
.B
\fB--stat\fP
stat command parser
.TP
.B
\fB--sysctl\fP
sysctl command parser
.TP
.B
\fB--systemctl\fP
systemctl command parser
.TP
.B
\fB--systemctl-lj\fP
systemctl list-jobs command parser
.TP
.B
\fB--systemctl-ls\fP
systemctl list-sockets command parser
.TP
.B
\fB--systemctl-luf\fP
systemctl list-unit-files command parser
.TP
.B
\fB--timedatectl\fP
timedatectl status command parser
.TP
.B
\fB--tracepath\fP
tracepath command parser
.TP
.B
\fB--traceroute\fP
traceroute command parser
.TP
.B
\fB--uname\fP
uname \fB-a\fP command parser
.TP
.B
\fB--uptime\fP
uptime command parser
.TP
.B
\fB--w\fP
w command parser
.TP
.B
\fB--who\fP
who command parser
.TP
.B
\fB--xml\fP
XML file parser
.TP
.B
\fB--yaml\fP
YAML file parser
.RE
.PP
Options:
.RS
.TP
.B
\fB-a\fP
about jc
.TP
.B
\fB-d\fP
debug - show traceback (\fB-dd\fP for verbose traceback)
.TP
.B
\fB-m\fP
monochrome output
.TP
.B
\fB-p\fP
pretty print output
.TP
.B
\fB-q\fP
quiet - suppress warnings
.TP
.B
\fB-r\fP
raw JSON output
.RE
.PP
Example:
ls \fB-al\fP | jc \fB--ls\fP \fB-p\fP
.RS
.PP
or using the magic syntax:
.PP
jc \fB-p\fP ls \fB-al\fP
@@ -1,3 +1,3 @@
 ruamel.yaml>=0.15.0
 xmltodict>=0.12.0
-Pygments>=2.4.2
+Pygments>=2.3.0
setup.py (4 changed lines)
@@ -5,14 +5,14 @@ with open('README.md', 'r') as f:

 setuptools.setup(
     name='jc',
-    version='1.11.3',
+    version='1.13.0',
     author='Kelly Brazil',
     author_email='kellyjonbrazil@gmail.com',
     description='Converts the output of popular command-line tools and file-types to JSON.',
     install_requires=[
         'ruamel.yaml>=0.15.0',
         'xmltodict>=0.12.0',
-        'Pygments>=2.4.2'
+        'Pygments>=2.3.0'
     ],
     license='MIT',
     long_description=long_description,
File diff suppressed because one or more lines are too long
@@ -1 +1 @@
[{"chain": "PREROUTING", "rules": [{"target": "PREROUTING_direct", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}, {"target": "PREROUTING_ZONES_SOURCE", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}, {"target": "PREROUTING_ZONES", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}]}, {"chain": "INPUT", "rules": [{"target": "INPUT_direct", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}]}, {"chain": "FORWARD", "rules": [{"target": "FORWARD_direct", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}]}, {"chain": "OUTPUT", "rules": [{"target": "OUTPUT_direct", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}]}, {"chain": "POSTROUTING", "rules": [{"target": "POSTROUTING_direct", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}]}, {"chain": "FORWARD_direct", "rules": []}, {"chain": "INPUT_direct", "rules": []}, {"chain": "OUTPUT_direct", "rules": []}, {"chain": "POSTROUTING_direct", "rules": []}, {"chain": "PREROUTING_ZONES", "rules": [{"target": "PRE_public", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere", "options": "[goto] "}, {"target": "PRE_public", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere", "options": "[goto] "}]}, {"chain": "PREROUTING_ZONES_SOURCE", "rules": []}, {"chain": "PREROUTING_direct", "rules": []}, {"chain": "PRE_public", "rules": [{"target": "PRE_public_log", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}, {"target": "PRE_public_deny", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}, {"target": "PRE_public_allow", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}]}, {"chain": "PRE_public_allow", "rules": []}, {"chain": "PRE_public_deny", "rules": []}]
[{"chain": "PREROUTING", "rules": [{"target": "PREROUTING_direct", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}, {"target": "PREROUTING_ZONES_SOURCE", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}, {"target": "PREROUTING_ZONES", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}]}, {"chain": "INPUT", "rules": [{"target": "INPUT_direct", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}]}, {"chain": "FORWARD", "rules": [{"target": "FORWARD_direct", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}]}, {"chain": "OUTPUT", "rules": [{"target": "OUTPUT_direct", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}]}, {"chain": "POSTROUTING", "rules": [{"target": "POSTROUTING_direct", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}]}, {"chain": "FORWARD_direct", "rules": []}, {"chain": "INPUT_direct", "rules": []}, {"chain": "OUTPUT_direct", "rules": []}, {"chain": "POSTROUTING_direct", "rules": []}, {"chain": "PREROUTING_ZONES", "rules": [{"target": "PRE_public", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere", "options": "[goto] "}, {"target": "PRE_public", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere", "options": "[goto] "}]}, {"chain": "PREROUTING_ZONES_SOURCE", "rules": []}, {"chain": "PREROUTING_direct", "rules": []}, {"chain": "PRE_public", "rules": [{"target": "PRE_public_log", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}, {"target": "PRE_public_deny", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}, {"target": "PRE_public_allow", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}]}, {"chain": "PRE_public_allow", "rules": []}, {"chain": "PRE_public_deny", "rules": []}, {"chain": "PRE_public_log", "rules": []}]
tests/fixtures/centos-7.7/iptables-nat.json (vendored, 2 changed lines)
@@ -1 +1 @@
[{"chain": "PREROUTING", "rules": [{"target": "PREROUTING_direct", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}, {"target": "PREROUTING_ZONES_SOURCE", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}, {"target": "PREROUTING_ZONES", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}, {"target": "DOCKER", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere", "options": "ADDRTYPE match dst-type LOCAL"}]}, {"chain": "INPUT", "rules": []}, {"chain": "OUTPUT", "rules": [{"target": "OUTPUT_direct", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}, {"target": "DOCKER", "prot": "all", "opt": null, "source": "anywhere", "destination": "!loopback/8", "options": "ADDRTYPE match dst-type LOCAL"}]}, {"chain": "POSTROUTING", "rules": [{"target": "MASQUERADE", "prot": "all", "opt": null, "source": "172.17.0.0/16", "destination": "anywhere"}, {"target": "POSTROUTING_direct", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}, {"target": "POSTROUTING_ZONES_SOURCE", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}, {"target": "POSTROUTING_ZONES", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}]}, {"chain": "DOCKER", "rules": [{"target": "RETURN", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}]}, {"chain": "OUTPUT_direct", "rules": []}, {"chain": "POSTROUTING_ZONES", "rules": [{"target": "POST_public", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere", "options": "[goto] "}, {"target": "POST_public", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere", "options": "[goto] "}]}, {"chain": "POSTROUTING_ZONES_SOURCE", "rules": []}, {"chain": "POSTROUTING_direct", "rules": []}, {"chain": "POST_public", "rules": [{"target": "POST_public_log", "prot": "all", "opt": null, "source": "anywhere", 
"destination": "anywhere"}, {"target": "POST_public_deny", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}, {"target": "POST_public_allow", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}]}, {"chain": "POST_public_allow", "rules": []}, {"chain": "POST_public_deny", "rules": []}, {"chain": "POST_public_log", "rules": []}, {"chain": "PREROUTING_ZONES", "rules": [{"target": "PRE_public", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere", "options": "[goto] "}, {"target": "PRE_public", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere", "options": "[goto] "}]}, {"chain": "PREROUTING_ZONES_SOURCE", "rules": []}, {"chain": "PREROUTING_direct", "rules": []}, {"chain": "PRE_public", "rules": [{"target": "PRE_public_log", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}, {"target": "PRE_public_deny", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}, {"target": "PRE_public_allow", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}]}, {"chain": "PRE_public_allow", "rules": []}, {"chain": "PRE_public_deny", "rules": []}]
[{"chain": "PREROUTING", "rules": [{"target": "PREROUTING_direct", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}, {"target": "PREROUTING_ZONES_SOURCE", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}, {"target": "PREROUTING_ZONES", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}, {"target": "DOCKER", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere", "options": "ADDRTYPE match dst-type LOCAL"}]}, {"chain": "INPUT", "rules": []}, {"chain": "OUTPUT", "rules": [{"target": "OUTPUT_direct", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}, {"target": "DOCKER", "prot": "all", "opt": null, "source": "anywhere", "destination": "!loopback/8", "options": "ADDRTYPE match dst-type LOCAL"}]}, {"chain": "POSTROUTING", "rules": [{"target": "MASQUERADE", "prot": "all", "opt": null, "source": "172.17.0.0/16", "destination": "anywhere"}, {"target": "POSTROUTING_direct", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}, {"target": "POSTROUTING_ZONES_SOURCE", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}, {"target": "POSTROUTING_ZONES", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}]}, {"chain": "DOCKER", "rules": [{"target": "RETURN", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}]}, {"chain": "OUTPUT_direct", "rules": []}, {"chain": "POSTROUTING_ZONES", "rules": [{"target": "POST_public", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere", "options": "[goto] "}, {"target": "POST_public", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere", "options": "[goto] "}]}, {"chain": "POSTROUTING_ZONES_SOURCE", "rules": []}, {"chain": "POSTROUTING_direct", "rules": []}, {"chain": "POST_public", "rules": [{"target": "POST_public_log", "prot": "all", "opt": null, "source": "anywhere", 
"destination": "anywhere"}, {"target": "POST_public_deny", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}, {"target": "POST_public_allow", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}]}, {"chain": "POST_public_allow", "rules": []}, {"chain": "POST_public_deny", "rules": []}, {"chain": "POST_public_log", "rules": []}, {"chain": "PREROUTING_ZONES", "rules": [{"target": "PRE_public", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere", "options": "[goto] "}, {"target": "PRE_public", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere", "options": "[goto] "}]}, {"chain": "PREROUTING_ZONES_SOURCE", "rules": []}, {"chain": "PREROUTING_direct", "rules": []}, {"chain": "PRE_public", "rules": [{"target": "PRE_public_log", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}, {"target": "PRE_public_deny", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}, {"target": "PRE_public_allow", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}]}, {"chain": "PRE_public_allow", "rules": []}, {"chain": "PRE_public_deny", "rules": []}, {"chain": "PRE_public_log", "rules": []}]
tests/fixtures/centos-7.7/iptables-raw.json (vendored, 2 changed lines)
@@ -1 +1 @@
[{"chain": "PREROUTING", "rules": [{"target": "PREROUTING_direct", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}, {"target": "PREROUTING_ZONES_SOURCE", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}, {"target": "PREROUTING_ZONES", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}]}, {"chain": "OUTPUT", "rules": [{"target": "OUTPUT_direct", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}]}, {"chain": "OUTPUT_direct", "rules": []}, {"chain": "PREROUTING_ZONES", "rules": [{"target": "PRE_public", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere", "options": "[goto] "}, {"target": "PRE_public", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere", "options": "[goto] "}]}, {"chain": "PREROUTING_ZONES_SOURCE", "rules": []}, {"chain": "PREROUTING_direct", "rules": []}, {"chain": "PRE_public", "rules": [{"target": "PRE_public_log", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}, {"target": "PRE_public_deny", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}, {"target": "PRE_public_allow", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}]}, {"chain": "PRE_public_allow", "rules": []}, {"chain": "PRE_public_deny", "rules": []}]
[{"chain": "PREROUTING", "rules": [{"target": "PREROUTING_direct", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}, {"target": "PREROUTING_ZONES_SOURCE", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}, {"target": "PREROUTING_ZONES", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}]}, {"chain": "OUTPUT", "rules": [{"target": "OUTPUT_direct", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}]}, {"chain": "OUTPUT_direct", "rules": []}, {"chain": "PREROUTING_ZONES", "rules": [{"target": "PRE_public", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere", "options": "[goto] "}, {"target": "PRE_public", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere", "options": "[goto] "}]}, {"chain": "PREROUTING_ZONES_SOURCE", "rules": []}, {"chain": "PREROUTING_direct", "rules": []}, {"chain": "PRE_public", "rules": [{"target": "PRE_public_log", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}, {"target": "PRE_public_deny", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}, {"target": "PRE_public_allow", "prot": "all", "opt": null, "source": "anywhere", "destination": "anywhere"}]}, {"chain": "PRE_public_allow", "rules": []}, {"chain": "PRE_public_deny", "rules": []}, {"chain": "PRE_public_log", "rules": []}]
tests/fixtures/centos-7.7/ping-hostname-O-D-p-s.json (vendored, new file)
@@ -0,0 +1 @@
{"destination_ip": "151.101.189.67", "data_bytes": 1400, "pattern": "0xabcd", "destination": "turner-tls.map.fastly.net", "packets_transmitted": 20, "packets_received": 20, "packet_loss_percent": 0.0, "duplicates": 0, "time_ms": 19146.0, "round_trip_ms_min": 28.96, "round_trip_ms_avg": 34.468, "round_trip_ms_max": 38.892, "round_trip_ms_stddev": 3.497, "responses": [{"type": "reply", "timestamp": 1594978465.914536, "bytes": 1408, "response_ip": "151.101.189.67", "icmp_seq": 1, "ttl": 59, "time_ms": 31.4, "duplicate": false}, {"type": "reply", "timestamp": 1594978465.993009, "bytes": 1408, "response_ip": "151.101.189.67", "icmp_seq": 2, "ttl": 59, "time_ms": 30.3, "duplicate": false}, {"type": "reply", "timestamp": 1594978467.010196, "bytes": 1408, "response_ip": "151.101.189.67", "icmp_seq": 3, "ttl": 59, "time_ms": 32.0, "duplicate": false}, {"type": "reply", "timestamp": 1594978468.033743, "bytes": 1408, "response_ip": "151.101.189.67", "icmp_seq": 4, "ttl": 59, "time_ms": 38.8, "duplicate": false}, {"type": "reply", "timestamp": 1594978469.051227, "bytes": 1408, "response_ip": "151.101.189.67", "icmp_seq": 5, "ttl": 59, "time_ms": 38.0, "duplicate": false}, {"type": "reply", "timestamp": 1594978470.048764, "bytes": 1408, "response_ip": "151.101.189.67", "icmp_seq": 6, "ttl": 59, "time_ms": 29.9, "duplicate": false}, {"type": "reply", "timestamp": 1594978471.051945, "bytes": 1408, "response_ip": "151.101.189.67", "icmp_seq": 7, "ttl": 59, "time_ms": 28.9, "duplicate": false}, {"type": "reply", "timestamp": 1594978472.064206, "bytes": 1408, "response_ip": "151.101.189.67", "icmp_seq": 8, "ttl": 59, "time_ms": 37.4, "duplicate": false}, {"type": "reply", "timestamp": 1594978473.062587, "bytes": 1408, "response_ip": "151.101.189.67", "icmp_seq": 9, "ttl": 59, "time_ms": 31.5, "duplicate": false}, {"type": "reply", "timestamp": 1594978474.074343, "bytes": 1408, "response_ip": "151.101.189.67", "icmp_seq": 10, "ttl": 59, "time_ms": 38.3, "duplicate": false}, {"type": 
"reply", "timestamp": 1594978475.079703, "bytes": 1408, "response_ip": "151.101.189.67", "icmp_seq": 11, "ttl": 59, "time_ms": 38.8, "duplicate": false}, {"type": "reply", "timestamp": 1594978476.076383, "bytes": 1408, "response_ip": "151.101.189.67", "icmp_seq": 12, "ttl": 59, "time_ms": 30.7, "duplicate": false}, {"type": "reply", "timestamp": 1594978477.084119, "bytes": 1408, "response_ip": "151.101.189.67", "icmp_seq": 13, "ttl": 59, "time_ms": 30.7, "duplicate": false}, {"type": "reply", "timestamp": 1594978478.092207, "bytes": 1408, "response_ip": "151.101.189.67", "icmp_seq": 14, "ttl": 59, "time_ms": 31.6, "duplicate": false}, {"type": "reply", "timestamp": 1594978479.104358, "bytes": 1408, "response_ip": "151.101.189.67", "icmp_seq": 15, "ttl": 59, "time_ms": 37.7, "duplicate": false}, {"type": "reply", "timestamp": 1594978480.106907, "bytes": 1408, "response_ip": "151.101.189.67", "icmp_seq": 16, "ttl": 59, "time_ms": 37.5, "duplicate": false}, {"type": "reply", "timestamp": 1594978481.11558, "bytes": 1408, "response_ip": "151.101.189.67", "icmp_seq": 17, "ttl": 59, "time_ms": 37.3, "duplicate": false}, {"type": "reply", "timestamp": 1594978482.119872, "bytes": 1408, "response_ip": "151.101.189.67", "icmp_seq": 18, "ttl": 59, "time_ms": 33.8, "duplicate": false}, {"type": "reply", "timestamp": 1594978483.131901, "bytes": 1408, "response_ip": "151.101.189.67", "icmp_seq": 19, "ttl": 59, "time_ms": 37.0, "duplicate": false}, {"type": "reply", "timestamp": 1594978484.141117, "bytes": 1408, "response_ip": "151.101.189.67", "icmp_seq": 20, "ttl": 59, "time_ms": 36.9, "duplicate": false}]}
tests/fixtures/centos-7.7/ping-hostname-O-D-p-s.out (vendored, new file)
@@ -0,0 +1,26 @@
PATTERN: 0xabcd
PING turner-tls.map.fastly.net (151.101.189.67) 1400(1428) bytes of data.
[1594978465.914536] 1408 bytes from 151.101.189.67 (151.101.189.67): icmp_seq=1 ttl=59 time=31.4 ms
[1594978465.993009] 1408 bytes from 151.101.189.67 (151.101.189.67): icmp_seq=2 ttl=59 time=30.3 ms
[1594978467.010196] 1408 bytes from 151.101.189.67 (151.101.189.67): icmp_seq=3 ttl=59 time=32.0 ms
[1594978468.033743] 1408 bytes from 151.101.189.67 (151.101.189.67): icmp_seq=4 ttl=59 time=38.8 ms
[1594978469.051227] 1408 bytes from 151.101.189.67 (151.101.189.67): icmp_seq=5 ttl=59 time=38.0 ms
[1594978470.048764] 1408 bytes from 151.101.189.67 (151.101.189.67): icmp_seq=6 ttl=59 time=29.9 ms
[1594978471.051945] 1408 bytes from 151.101.189.67 (151.101.189.67): icmp_seq=7 ttl=59 time=28.9 ms
[1594978472.064206] 1408 bytes from 151.101.189.67 (151.101.189.67): icmp_seq=8 ttl=59 time=37.4 ms
[1594978473.062587] 1408 bytes from 151.101.189.67 (151.101.189.67): icmp_seq=9 ttl=59 time=31.5 ms
[1594978474.074343] 1408 bytes from 151.101.189.67 (151.101.189.67): icmp_seq=10 ttl=59 time=38.3 ms
[1594978475.079703] 1408 bytes from 151.101.189.67 (151.101.189.67): icmp_seq=11 ttl=59 time=38.8 ms
[1594978476.076383] 1408 bytes from 151.101.189.67 (151.101.189.67): icmp_seq=12 ttl=59 time=30.7 ms
[1594978477.084119] 1408 bytes from 151.101.189.67 (151.101.189.67): icmp_seq=13 ttl=59 time=30.7 ms
[1594978478.092207] 1408 bytes from 151.101.189.67 (151.101.189.67): icmp_seq=14 ttl=59 time=31.6 ms
[1594978479.104358] 1408 bytes from 151.101.189.67 (151.101.189.67): icmp_seq=15 ttl=59 time=37.7 ms
[1594978480.106907] 1408 bytes from 151.101.189.67 (151.101.189.67): icmp_seq=16 ttl=59 time=37.5 ms
[1594978481.115580] 1408 bytes from 151.101.189.67 (151.101.189.67): icmp_seq=17 ttl=59 time=37.3 ms
[1594978482.119872] 1408 bytes from 151.101.189.67 (151.101.189.67): icmp_seq=18 ttl=59 time=33.8 ms
[1594978483.131901] 1408 bytes from 151.101.189.67 (151.101.189.67): icmp_seq=19 ttl=59 time=37.0 ms
[1594978484.141117] 1408 bytes from 151.101.189.67 (151.101.189.67): icmp_seq=20 ttl=59 time=36.9 ms

--- turner-tls.map.fastly.net ping statistics ---
20 packets transmitted, 20 received, 0% packet loss, time 19146ms
rtt min/avg/max/mdev = 28.960/34.468/38.892/3.497 ms
1
tests/fixtures/centos-7.7/ping-hostname-O-p.json
vendored
Normal file
@@ -0,0 +1 @@
{"destination_ip": "151.101.129.67", "data_bytes": 56, "pattern": "0xabcd", "destination": "turner-tls.map.fastly.net", "packets_transmitted": 20, "packets_received": 20, "packet_loss_percent": 0.0, "duplicates": 0, "time_ms": 19233.0, "round_trip_ms_min": 23.359, "round_trip_ms_avg": 28.495, "round_trip_ms_max": 33.979, "round_trip_ms_stddev": 4.081, "responses": [{"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "151.101.129.67", "icmp_seq": 1, "ttl": 59, "time_ms": 24.4, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "151.101.129.67", "icmp_seq": 2, "ttl": 59, "time_ms": 23.3, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "151.101.129.67", "icmp_seq": 3, "ttl": 59, "time_ms": 32.6, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "151.101.129.67", "icmp_seq": 4, "ttl": 59, "time_ms": 32.6, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "151.101.129.67", "icmp_seq": 5, "ttl": 59, "time_ms": 26.9, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "151.101.129.67", "icmp_seq": 6, "ttl": 59, "time_ms": 24.1, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "151.101.129.67", "icmp_seq": 7, "ttl": 59, "time_ms": 24.6, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "151.101.129.67", "icmp_seq": 8, "ttl": 59, "time_ms": 33.9, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "151.101.129.67", "icmp_seq": 9, "ttl": 59, "time_ms": 32.7, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "151.101.129.67", "icmp_seq": 10, "ttl": 59, "time_ms": 31.2, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "151.101.129.67", "icmp_seq": 11, "ttl": 59, "time_ms": 25.7, "duplicate": false}, {"type": 
"reply", "timestamp": null, "bytes": 64, "response_ip": "151.101.129.67", "icmp_seq": 12, "ttl": 59, "time_ms": 33.8, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "151.101.129.67", "icmp_seq": 13, "ttl": 59, "time_ms": 23.7, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "151.101.129.67", "icmp_seq": 14, "ttl": 59, "time_ms": 23.9, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "151.101.129.67", "icmp_seq": 15, "ttl": 59, "time_ms": 33.6, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "151.101.129.67", "icmp_seq": 16, "ttl": 59, "time_ms": 24.5, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "151.101.129.67", "icmp_seq": 17, "ttl": 59, "time_ms": 30.1, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "151.101.129.67", "icmp_seq": 18, "ttl": 59, "time_ms": 24.1, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "151.101.129.67", "icmp_seq": 19, "ttl": 59, "time_ms": 32.2, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "151.101.129.67", "icmp_seq": 20, "ttl": 59, "time_ms": 31.0, "duplicate": false}]}
26
tests/fixtures/centos-7.7/ping-hostname-O-p.out
vendored
Normal file
@@ -0,0 +1,26 @@
PATTERN: 0xabcd
PING turner-tls.map.fastly.net (151.101.129.67) 56(84) bytes of data.
64 bytes from 151.101.129.67 (151.101.129.67): icmp_seq=1 ttl=59 time=24.4 ms
64 bytes from 151.101.129.67 (151.101.129.67): icmp_seq=2 ttl=59 time=23.3 ms
64 bytes from 151.101.129.67 (151.101.129.67): icmp_seq=3 ttl=59 time=32.6 ms
64 bytes from 151.101.129.67 (151.101.129.67): icmp_seq=4 ttl=59 time=32.6 ms
64 bytes from 151.101.129.67 (151.101.129.67): icmp_seq=5 ttl=59 time=26.9 ms
64 bytes from 151.101.129.67 (151.101.129.67): icmp_seq=6 ttl=59 time=24.1 ms
64 bytes from 151.101.129.67 (151.101.129.67): icmp_seq=7 ttl=59 time=24.6 ms
64 bytes from 151.101.129.67 (151.101.129.67): icmp_seq=8 ttl=59 time=33.9 ms
64 bytes from 151.101.129.67 (151.101.129.67): icmp_seq=9 ttl=59 time=32.7 ms
64 bytes from 151.101.129.67 (151.101.129.67): icmp_seq=10 ttl=59 time=31.2 ms
64 bytes from 151.101.129.67 (151.101.129.67): icmp_seq=11 ttl=59 time=25.7 ms
64 bytes from 151.101.129.67 (151.101.129.67): icmp_seq=12 ttl=59 time=33.8 ms
64 bytes from 151.101.129.67 (151.101.129.67): icmp_seq=13 ttl=59 time=23.7 ms
64 bytes from 151.101.129.67 (151.101.129.67): icmp_seq=14 ttl=59 time=23.9 ms
64 bytes from 151.101.129.67 (151.101.129.67): icmp_seq=15 ttl=59 time=33.6 ms
64 bytes from 151.101.129.67 (151.101.129.67): icmp_seq=16 ttl=59 time=24.5 ms
64 bytes from 151.101.129.67 (151.101.129.67): icmp_seq=17 ttl=59 time=30.1 ms
64 bytes from 151.101.129.67 (151.101.129.67): icmp_seq=18 ttl=59 time=24.1 ms
64 bytes from 151.101.129.67 (151.101.129.67): icmp_seq=19 ttl=59 time=32.2 ms
64 bytes from 151.101.129.67 (151.101.129.67): icmp_seq=20 ttl=59 time=31.0 ms

--- turner-tls.map.fastly.net ping statistics ---
20 packets transmitted, 20 received, 0% packet loss, time 19233ms
rtt min/avg/max/mdev = 23.359/28.495/33.979/4.081 ms
1
tests/fixtures/centos-7.7/ping-hostname-O.json
vendored
Normal file
@@ -0,0 +1 @@
{"destination_ip": "151.101.189.67", "data_bytes": 56, "pattern": null, "destination": "turner-tls.map.fastly.net", "packets_transmitted": 20, "packets_received": 19, "packet_loss_percent": 5.0, "duplicates": 0, "time_ms": 19125.0, "round_trip_ms_min": 27.656, "round_trip_ms_avg": 33.717, "round_trip_ms_max": 36.758, "round_trip_ms_stddev": 2.814, "responses": [{"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "151.101.189.67", "icmp_seq": 1, "ttl": 59, "time_ms": 29.6, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "151.101.189.67", "icmp_seq": 2, "ttl": 59, "time_ms": 30.1, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "151.101.189.67", "icmp_seq": 3, "ttl": 59, "time_ms": 35.5, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "151.101.189.67", "icmp_seq": 4, "ttl": 59, "time_ms": 35.5, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "151.101.189.67", "icmp_seq": 5, "ttl": 59, "time_ms": 34.9, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "151.101.189.67", "icmp_seq": 6, "ttl": 59, "time_ms": 29.9, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "151.101.189.67", "icmp_seq": 7, "ttl": 59, "time_ms": 27.6, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "151.101.189.67", "icmp_seq": 8, "ttl": 59, "time_ms": 28.6, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "151.101.189.67", "icmp_seq": 9, "ttl": 59, "time_ms": 35.2, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "151.101.189.67", "icmp_seq": 10, "ttl": 59, "time_ms": 34.4, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "151.101.189.67", "icmp_seq": 11, "ttl": 59, "time_ms": 35.9, "duplicate": false}, {"type": 
"reply", "timestamp": null, "bytes": 64, "response_ip": "151.101.189.67", "icmp_seq": 12, "ttl": 59, "time_ms": 35.8, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "151.101.189.67", "icmp_seq": 13, "ttl": 59, "time_ms": 34.4, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "151.101.189.67", "icmp_seq": 14, "ttl": 59, "time_ms": 35.5, "duplicate": false}, {"type": "timeout", "timestamp": null, "icmp_seq": 15}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "151.101.189.67", "icmp_seq": 16, "ttl": 59, "time_ms": 36.6, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "151.101.189.67", "icmp_seq": 17, "ttl": 59, "time_ms": 34.6, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "151.101.189.67", "icmp_seq": 18, "ttl": 59, "time_ms": 34.6, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "151.101.189.67", "icmp_seq": 19, "ttl": 59, "time_ms": 36.7, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "151.101.189.67", "icmp_seq": 20, "ttl": 59, "time_ms": 34.3, "duplicate": false}]}
25
tests/fixtures/centos-7.7/ping-hostname-O.out
vendored
Normal file
@@ -0,0 +1,25 @@
PING turner-tls.map.fastly.net (151.101.189.67) 56(84) bytes of data.
64 bytes from 151.101.189.67 (151.101.189.67): icmp_seq=1 ttl=59 time=29.6 ms
64 bytes from 151.101.189.67 (151.101.189.67): icmp_seq=2 ttl=59 time=30.1 ms
64 bytes from 151.101.189.67 (151.101.189.67): icmp_seq=3 ttl=59 time=35.5 ms
64 bytes from 151.101.189.67 (151.101.189.67): icmp_seq=4 ttl=59 time=35.5 ms
64 bytes from 151.101.189.67 (151.101.189.67): icmp_seq=5 ttl=59 time=34.9 ms
64 bytes from 151.101.189.67 (151.101.189.67): icmp_seq=6 ttl=59 time=29.9 ms
64 bytes from 151.101.189.67 (151.101.189.67): icmp_seq=7 ttl=59 time=27.6 ms
64 bytes from 151.101.189.67 (151.101.189.67): icmp_seq=8 ttl=59 time=28.6 ms
64 bytes from 151.101.189.67 (151.101.189.67): icmp_seq=9 ttl=59 time=35.2 ms
64 bytes from 151.101.189.67 (151.101.189.67): icmp_seq=10 ttl=59 time=34.4 ms
64 bytes from 151.101.189.67 (151.101.189.67): icmp_seq=11 ttl=59 time=35.9 ms
64 bytes from 151.101.189.67 (151.101.189.67): icmp_seq=12 ttl=59 time=35.8 ms
64 bytes from 151.101.189.67 (151.101.189.67): icmp_seq=13 ttl=59 time=34.4 ms
64 bytes from 151.101.189.67 (151.101.189.67): icmp_seq=14 ttl=59 time=35.5 ms
no answer yet for icmp_seq=15
64 bytes from 151.101.189.67 (151.101.189.67): icmp_seq=16 ttl=59 time=36.6 ms
64 bytes from 151.101.189.67 (151.101.189.67): icmp_seq=17 ttl=59 time=34.6 ms
64 bytes from 151.101.189.67 (151.101.189.67): icmp_seq=18 ttl=59 time=34.6 ms
64 bytes from 151.101.189.67 (151.101.189.67): icmp_seq=19 ttl=59 time=36.7 ms
64 bytes from 151.101.189.67 (151.101.189.67): icmp_seq=20 ttl=59 time=34.3 ms

--- turner-tls.map.fastly.net ping statistics ---
20 packets transmitted, 19 received, 5% packet loss, time 19125ms
rtt min/avg/max/mdev = 27.656/33.717/36.758/2.814 ms
1
tests/fixtures/centos-7.7/ping-ip-O-D.json
vendored
Normal file
@@ -0,0 +1 @@
{"destination_ip": "127.0.0.1", "data_bytes": 56, "pattern": null, "destination": "127.0.0.1", "packets_transmitted": 20, "packets_received": 20, "packet_loss_percent": 0.0, "duplicates": 0, "time_ms": 19081.0, "round_trip_ms_min": 0.041, "round_trip_ms_avg": 0.048, "round_trip_ms_max": 0.081, "round_trip_ms_stddev": 0.009, "responses": [{"type": "reply", "timestamp": 1595037214.261953, "bytes": 64, "response_ip": "127.0.0.1", "icmp_seq": 1, "ttl": 64, "time_ms": 0.041, "duplicate": false}, {"type": "reply", "timestamp": 1595037215.264798, "bytes": 64, "response_ip": "127.0.0.1", "icmp_seq": 2, "ttl": 64, "time_ms": 0.048, "duplicate": false}, {"type": "reply", "timestamp": 1595037216.272296, "bytes": 64, "response_ip": "127.0.0.1", "icmp_seq": 3, "ttl": 64, "time_ms": 0.047, "duplicate": false}, {"type": "reply", "timestamp": 1595037217.275851, "bytes": 64, "response_ip": "127.0.0.1", "icmp_seq": 4, "ttl": 64, "time_ms": 0.062, "duplicate": false}, {"type": "reply", "timestamp": 1595037218.284242, "bytes": 64, "response_ip": "127.0.0.1", "icmp_seq": 5, "ttl": 64, "time_ms": 0.045, "duplicate": false}, {"type": "reply", "timestamp": 1595037219.283712, "bytes": 64, "response_ip": "127.0.0.1", "icmp_seq": 6, "ttl": 64, "time_ms": 0.043, "duplicate": false}, {"type": "reply", "timestamp": 1595037220.290949, "bytes": 64, "response_ip": "127.0.0.1", "icmp_seq": 7, "ttl": 64, "time_ms": 0.046, "duplicate": false}, {"type": "reply", "timestamp": 1595037221.295962, "bytes": 64, "response_ip": "127.0.0.1", "icmp_seq": 8, "ttl": 64, "time_ms": 0.044, "duplicate": false}, {"type": "reply", "timestamp": 1595037222.30702, "bytes": 64, "response_ip": "127.0.0.1", "icmp_seq": 9, "ttl": 64, "time_ms": 0.048, "duplicate": false}, {"type": "reply", "timestamp": 1595037223.313919, "bytes": 64, "response_ip": "127.0.0.1", "icmp_seq": 10, "ttl": 64, "time_ms": 0.081, "duplicate": false}, {"type": "reply", "timestamp": 1595037224.313679, "bytes": 64, "response_ip": "127.0.0.1", 
"icmp_seq": 11, "ttl": 64, "time_ms": 0.043, "duplicate": false}, {"type": "reply", "timestamp": 1595037225.320748, "bytes": 64, "response_ip": "127.0.0.1", "icmp_seq": 12, "ttl": 64, "time_ms": 0.044, "duplicate": false}, {"type": "reply", "timestamp": 1595037226.324322, "bytes": 64, "response_ip": "127.0.0.1", "icmp_seq": 13, "ttl": 64, "time_ms": 0.045, "duplicate": false}, {"type": "reply", "timestamp": 1595037227.325835, "bytes": 64, "response_ip": "127.0.0.1", "icmp_seq": 14, "ttl": 64, "time_ms": 0.046, "duplicate": false}, {"type": "reply", "timestamp": 1595037228.327028, "bytes": 64, "response_ip": "127.0.0.1", "icmp_seq": 15, "ttl": 64, "time_ms": 0.046, "duplicate": false}, {"type": "reply", "timestamp": 1595037229.329891, "bytes": 64, "response_ip": "127.0.0.1", "icmp_seq": 16, "ttl": 64, "time_ms": 0.052, "duplicate": false}, {"type": "reply", "timestamp": 1595037230.333891, "bytes": 64, "response_ip": "127.0.0.1", "icmp_seq": 17, "ttl": 64, "time_ms": 0.044, "duplicate": false}, {"type": "reply", "timestamp": 1595037231.338137, "bytes": 64, "response_ip": "127.0.0.1", "icmp_seq": 18, "ttl": 64, "time_ms": 0.046, "duplicate": false}, {"type": "reply", "timestamp": 1595037232.340475, "bytes": 64, "response_ip": "127.0.0.1", "icmp_seq": 19, "ttl": 64, "time_ms": 0.048, "duplicate": false}, {"type": "reply", "timestamp": 1595037233.343058, "bytes": 64, "response_ip": "127.0.0.1", "icmp_seq": 20, "ttl": 64, "time_ms": 0.045, "duplicate": false}]}
25
tests/fixtures/centos-7.7/ping-ip-O-D.out
vendored
Normal file
@@ -0,0 +1,25 @@
PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.
[1595037214.261953] 64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms
[1595037215.264798] 64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.048 ms
[1595037216.272296] 64 bytes from 127.0.0.1: icmp_seq=3 ttl=64 time=0.047 ms
[1595037217.275851] 64 bytes from 127.0.0.1: icmp_seq=4 ttl=64 time=0.062 ms
[1595037218.284242] 64 bytes from 127.0.0.1: icmp_seq=5 ttl=64 time=0.045 ms
[1595037219.283712] 64 bytes from 127.0.0.1: icmp_seq=6 ttl=64 time=0.043 ms
[1595037220.290949] 64 bytes from 127.0.0.1: icmp_seq=7 ttl=64 time=0.046 ms
[1595037221.295962] 64 bytes from 127.0.0.1: icmp_seq=8 ttl=64 time=0.044 ms
[1595037222.307020] 64 bytes from 127.0.0.1: icmp_seq=9 ttl=64 time=0.048 ms
[1595037223.313919] 64 bytes from 127.0.0.1: icmp_seq=10 ttl=64 time=0.081 ms
[1595037224.313679] 64 bytes from 127.0.0.1: icmp_seq=11 ttl=64 time=0.043 ms
[1595037225.320748] 64 bytes from 127.0.0.1: icmp_seq=12 ttl=64 time=0.044 ms
[1595037226.324322] 64 bytes from 127.0.0.1: icmp_seq=13 ttl=64 time=0.045 ms
[1595037227.325835] 64 bytes from 127.0.0.1: icmp_seq=14 ttl=64 time=0.046 ms
[1595037228.327028] 64 bytes from 127.0.0.1: icmp_seq=15 ttl=64 time=0.046 ms
[1595037229.329891] 64 bytes from 127.0.0.1: icmp_seq=16 ttl=64 time=0.052 ms
[1595037230.333891] 64 bytes from 127.0.0.1: icmp_seq=17 ttl=64 time=0.044 ms
[1595037231.338137] 64 bytes from 127.0.0.1: icmp_seq=18 ttl=64 time=0.046 ms
[1595037232.340475] 64 bytes from 127.0.0.1: icmp_seq=19 ttl=64 time=0.048 ms
[1595037233.343058] 64 bytes from 127.0.0.1: icmp_seq=20 ttl=64 time=0.045 ms

--- 127.0.0.1 ping statistics ---
20 packets transmitted, 20 received, 0% packet loss, time 19081ms
rtt min/avg/max/mdev = 0.041/0.048/0.081/0.009 ms
1
tests/fixtures/centos-7.7/ping-ip-O.json
vendored
Normal file
@@ -0,0 +1 @@
{"destination_ip": "127.0.0.1", "data_bytes": 56, "pattern": null, "destination": "127.0.0.1", "packets_transmitted": 20, "packets_received": 20, "packet_loss_percent": 0.0, "duplicates": 0, "time_ms": 19070.0, "round_trip_ms_min": 0.038, "round_trip_ms_avg": 0.047, "round_trip_ms_max": 0.08, "round_trip_ms_stddev": 0.011, "responses": [{"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "127.0.0.1", "icmp_seq": 1, "ttl": 64, "time_ms": 0.038, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "127.0.0.1", "icmp_seq": 2, "ttl": 64, "time_ms": 0.043, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "127.0.0.1", "icmp_seq": 3, "ttl": 64, "time_ms": 0.044, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "127.0.0.1", "icmp_seq": 4, "ttl": 64, "time_ms": 0.052, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "127.0.0.1", "icmp_seq": 5, "ttl": 64, "time_ms": 0.08, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "127.0.0.1", "icmp_seq": 6, "ttl": 64, "time_ms": 0.043, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "127.0.0.1", "icmp_seq": 7, "ttl": 64, "time_ms": 0.047, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "127.0.0.1", "icmp_seq": 8, "ttl": 64, "time_ms": 0.04, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "127.0.0.1", "icmp_seq": 9, "ttl": 64, "time_ms": 0.052, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "127.0.0.1", "icmp_seq": 10, "ttl": 64, "time_ms": 0.044, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "127.0.0.1", "icmp_seq": 11, "ttl": 64, "time_ms": 0.043, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "127.0.0.1", 
"icmp_seq": 12, "ttl": 64, "time_ms": 0.043, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "127.0.0.1", "icmp_seq": 13, "ttl": 64, "time_ms": 0.05, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "127.0.0.1", "icmp_seq": 14, "ttl": 64, "time_ms": 0.045, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "127.0.0.1", "icmp_seq": 15, "ttl": 64, "time_ms": 0.062, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "127.0.0.1", "icmp_seq": 16, "ttl": 64, "time_ms": 0.046, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "127.0.0.1", "icmp_seq": 17, "ttl": 64, "time_ms": 0.046, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "127.0.0.1", "icmp_seq": 18, "ttl": 64, "time_ms": 0.045, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "127.0.0.1", "icmp_seq": 19, "ttl": 64, "time_ms": 0.044, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "127.0.0.1", "icmp_seq": 20, "ttl": 64, "time_ms": 0.044, "duplicate": false}]}
25
tests/fixtures/centos-7.7/ping-ip-O.out
vendored
Normal file
@@ -0,0 +1,25 @@
PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.043 ms
64 bytes from 127.0.0.1: icmp_seq=3 ttl=64 time=0.044 ms
64 bytes from 127.0.0.1: icmp_seq=4 ttl=64 time=0.052 ms
64 bytes from 127.0.0.1: icmp_seq=5 ttl=64 time=0.080 ms
64 bytes from 127.0.0.1: icmp_seq=6 ttl=64 time=0.043 ms
64 bytes from 127.0.0.1: icmp_seq=7 ttl=64 time=0.047 ms
64 bytes from 127.0.0.1: icmp_seq=8 ttl=64 time=0.040 ms
64 bytes from 127.0.0.1: icmp_seq=9 ttl=64 time=0.052 ms
64 bytes from 127.0.0.1: icmp_seq=10 ttl=64 time=0.044 ms
64 bytes from 127.0.0.1: icmp_seq=11 ttl=64 time=0.043 ms
64 bytes from 127.0.0.1: icmp_seq=12 ttl=64 time=0.043 ms
64 bytes from 127.0.0.1: icmp_seq=13 ttl=64 time=0.050 ms
64 bytes from 127.0.0.1: icmp_seq=14 ttl=64 time=0.045 ms
64 bytes from 127.0.0.1: icmp_seq=15 ttl=64 time=0.062 ms
64 bytes from 127.0.0.1: icmp_seq=16 ttl=64 time=0.046 ms
64 bytes from 127.0.0.1: icmp_seq=17 ttl=64 time=0.046 ms
64 bytes from 127.0.0.1: icmp_seq=18 ttl=64 time=0.045 ms
64 bytes from 127.0.0.1: icmp_seq=19 ttl=64 time=0.044 ms
64 bytes from 127.0.0.1: icmp_seq=20 ttl=64 time=0.044 ms

--- 127.0.0.1 ping statistics ---
20 packets transmitted, 20 received, 0% packet loss, time 19070ms
rtt min/avg/max/mdev = 0.038/0.047/0.080/0.011 ms
1
tests/fixtures/centos-7.7/ping-ip-dup.json
vendored
Normal file
@@ -0,0 +1 @@
{"destination_ip": "192.168.1.255", "data_bytes": 56, "pattern": null, "destination": "192.168.1.255", "packets_transmitted": 2, "packets_received": 2, "packet_loss_percent": 0.0, "duplicates": 19, "time_ms": 1013.0, "round_trip_ms_min": 0.586, "round_trip_ms_avg": 504.26, "round_trip_ms_max": 1276.448, "round_trip_ms_stddev": 417.208, "responses": [{"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "192.168.1.221", "icmp_seq": 1, "ttl": 64, "time_ms": 0.586, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "192.168.1.88", "icmp_seq": 1, "ttl": 64, "time_ms": 382.0, "duplicate": true}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "192.168.1.78", "icmp_seq": 1, "ttl": 128, "time_ms": 382.0, "duplicate": true}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "192.168.1.217", "icmp_seq": 1, "ttl": 255, "time_ms": 387.0, "duplicate": true}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "192.168.1.186", "icmp_seq": 1, "ttl": 64, "time_ms": 389.0, "duplicate": true}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "192.168.1.89", "icmp_seq": 1, "ttl": 64, "time_ms": 389.0, "duplicate": true}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "192.168.1.75", "icmp_seq": 1, "ttl": 64, "time_ms": 584.0, "duplicate": true}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "192.168.1.221", "icmp_seq": 2, "ttl": 64, "time_ms": 0.861, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "192.168.1.78", "icmp_seq": 2, "ttl": 128, "time_ms": 4.17, "duplicate": true}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "192.168.1.88", "icmp_seq": 2, "ttl": 64, "time_ms": 4.19, "duplicate": true}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "192.168.1.89", "icmp_seq": 2, "ttl": 64, "time_ms": 12.7, "duplicate": true}, {"type": "reply", "timestamp": null, "bytes": 
64, "response_ip": "192.168.1.81", "icmp_seq": 1, "ttl": 64, "time_ms": 1029.0, "duplicate": true}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "192.168.1.72", "icmp_seq": 1, "ttl": 64, "time_ms": 1276.0, "duplicate": true}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "192.168.1.251", "icmp_seq": 1, "ttl": 64, "time_ms": 1276.0, "duplicate": true}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "192.168.1.251", "icmp_seq": 2, "ttl": 64, "time_ms": 262.0, "duplicate": true}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "192.168.1.72", "icmp_seq": 2, "ttl": 64, "time_ms": 263.0, "duplicate": true}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "192.168.1.246", "icmp_seq": 2, "ttl": 255, "time_ms": 263.0, "duplicate": true}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "192.168.1.217", "icmp_seq": 2, "ttl": 255, "time_ms": 919.0, "duplicate": true}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "192.168.1.186", "icmp_seq": 2, "ttl": 64, "time_ms": 919.0, "duplicate": true}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "192.168.1.75", "icmp_seq": 2, "ttl": 64, "time_ms": 919.0, "duplicate": true}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "192.168.1.81", "icmp_seq": 2, "ttl": 64, "time_ms": 919.0, "duplicate": true}]}
27
tests/fixtures/centos-7.7/ping-ip-dup.out
vendored
Normal file
@@ -0,0 +1,27 @@
WARNING: pinging broadcast address
PING 192.168.1.255 (192.168.1.255) 56(84) bytes of data.
64 bytes from 192.168.1.221: icmp_seq=1 ttl=64 time=0.586 ms
64 bytes from 192.168.1.88: icmp_seq=1 ttl=64 time=382 ms (DUP!)
64 bytes from 192.168.1.78: icmp_seq=1 ttl=128 time=382 ms (DUP!)
64 bytes from 192.168.1.217: icmp_seq=1 ttl=255 time=387 ms (DUP!)
64 bytes from 192.168.1.186: icmp_seq=1 ttl=64 time=389 ms (DUP!)
64 bytes from 192.168.1.89: icmp_seq=1 ttl=64 time=389 ms (DUP!)
64 bytes from 192.168.1.75: icmp_seq=1 ttl=64 time=584 ms (DUP!)
64 bytes from 192.168.1.221: icmp_seq=2 ttl=64 time=0.861 ms
64 bytes from 192.168.1.78: icmp_seq=2 ttl=128 time=4.17 ms (DUP!)
64 bytes from 192.168.1.88: icmp_seq=2 ttl=64 time=4.19 ms (DUP!)
64 bytes from 192.168.1.89: icmp_seq=2 ttl=64 time=12.7 ms (DUP!)
64 bytes from 192.168.1.81: icmp_seq=1 ttl=64 time=1029 ms (DUP!)
64 bytes from 192.168.1.72: icmp_seq=1 ttl=64 time=1276 ms (DUP!)
64 bytes from 192.168.1.251: icmp_seq=1 ttl=64 time=1276 ms (DUP!)
64 bytes from 192.168.1.251: icmp_seq=2 ttl=64 time=262 ms (DUP!)
64 bytes from 192.168.1.72: icmp_seq=2 ttl=64 time=263 ms (DUP!)
64 bytes from 192.168.1.246: icmp_seq=2 ttl=255 time=263 ms (DUP!)
64 bytes from 192.168.1.217: icmp_seq=2 ttl=255 time=919 ms (DUP!)
64 bytes from 192.168.1.186: icmp_seq=2 ttl=64 time=919 ms (DUP!)
64 bytes from 192.168.1.75: icmp_seq=2 ttl=64 time=919 ms (DUP!)
64 bytes from 192.168.1.81: icmp_seq=2 ttl=64 time=919 ms (DUP!)

--- 192.168.1.255 ping statistics ---
2 packets transmitted, 2 received, +19 duplicates, 0% packet loss, time 1013ms
rtt min/avg/max/mdev = 0.586/504.260/1276.448/417.208 ms, pipe 2
1
tests/fixtures/centos-7.7/ping6-hostname-O-D-p-s.json
vendored
Normal file
@@ -0,0 +1 @@
{"destination_ip": "2a04:4e42:2d::323", "data_bytes": 1400, "pattern": "0xabcd", "destination": "www.cnn.com", "packets_transmitted": 20, "packets_received": 20, "packet_loss_percent": 0.0, "duplicates": 0, "time_ms": 19077.0, "round_trip_ms_min": 31.845, "round_trip_ms_avg": 39.274, "round_trip_ms_max": 43.243, "round_trip_ms_stddev": 3.522, "responses": [{"type": "reply", "timestamp": 1594978345.609669, "bytes": 1408, "response_ip": "2a04:4e42:2d::323", "icmp_seq": 1, "ttl": 59, "time_ms": 32.4, "duplicate": false}, {"type": "reply", "timestamp": 1594978346.58542, "bytes": 1408, "response_ip": "2a04:4e42:2d::323", "icmp_seq": 2, "ttl": 59, "time_ms": 39.9, "duplicate": false}, {"type": "reply", "timestamp": 1594978347.594128, "bytes": 1408, "response_ip": "2a04:4e42:2d::323", "icmp_seq": 3, "ttl": 59, "time_ms": 42.3, "duplicate": false}, {"type": "reply", "timestamp": 1594978348.595221, "bytes": 1408, "response_ip": "2a04:4e42:2d::323", "icmp_seq": 4, "ttl": 59, "time_ms": 40.2, "duplicate": false}, {"type": "reply", "timestamp": 1594978349.600372, "bytes": 1408, "response_ip": "2a04:4e42:2d::323", "icmp_seq": 5, "ttl": 59, "time_ms": 43.2, "duplicate": false}, {"type": "reply", "timestamp": 1594978350.590676, "bytes": 1408, "response_ip": "2a04:4e42:2d::323", "icmp_seq": 6, "ttl": 59, "time_ms": 31.8, "duplicate": false}, {"type": "reply", "timestamp": 1594978351.601527, "bytes": 1408, "response_ip": "2a04:4e42:2d::323", "icmp_seq": 7, "ttl": 59, "time_ms": 41.8, "duplicate": false}, {"type": "reply", "timestamp": 1594978352.604195, "bytes": 1408, "response_ip": "2a04:4e42:2d::323", "icmp_seq": 8, "ttl": 59, "time_ms": 41.7, "duplicate": false}, {"type": "reply", "timestamp": 1594978353.607212, "bytes": 1408, "response_ip": "2a04:4e42:2d::323", "icmp_seq": 9, "ttl": 59, "time_ms": 42.0, "duplicate": false}, {"type": "reply", "timestamp": 1594978354.610771, "bytes": 1408, "response_ip": "2a04:4e42:2d::323", "icmp_seq": 10, "ttl": 59, "time_ms": 40.7, 
"duplicate": false}, {"type": "reply", "timestamp": 1594978355.613729, "bytes": 1408, "response_ip": "2a04:4e42:2d::323", "icmp_seq": 11, "ttl": 59, "time_ms": 40.4, "duplicate": false}, {"type": "reply", "timestamp": 1594978356.611887, "bytes": 1408, "response_ip": "2a04:4e42:2d::323", "icmp_seq": 12, "ttl": 59, "time_ms": 32.6, "duplicate": false}, {"type": "reply", "timestamp": 1594978357.62481, "bytes": 1408, "response_ip": "2a04:4e42:2d::323", "icmp_seq": 13, "ttl": 59, "time_ms": 40.1, "duplicate": false}, {"type": "reply", "timestamp": 1594978358.629185, "bytes": 1408, "response_ip": "2a04:4e42:2d::323", "icmp_seq": 14, "ttl": 59, "time_ms": 42.0, "duplicate": false}, {"type": "reply", "timestamp": 1594978359.634854, "bytes": 1408, "response_ip": "2a04:4e42:2d::323", "icmp_seq": 15, "ttl": 59, "time_ms": 41.2, "duplicate": false}, {"type": "reply", "timestamp": 1594978360.638344, "bytes": 1408, "response_ip": "2a04:4e42:2d::323", "icmp_seq": 16, "ttl": 59, "time_ms": 40.6, "duplicate": false}, {"type": "reply", "timestamp": 1594978361.640968, "bytes": 1408, "response_ip": "2a04:4e42:2d::323", "icmp_seq": 17, "ttl": 59, "time_ms": 40.7, "duplicate": false}, {"type": "reply", "timestamp": 1594978362.645739, "bytes": 1408, "response_ip": "2a04:4e42:2d::323", "icmp_seq": 18, "ttl": 59, "time_ms": 39.9, "duplicate": false}, {"type": "reply", "timestamp": 1594978363.6467, "bytes": 1408, "response_ip": "2a04:4e42:2d::323", "icmp_seq": 19, "ttl": 59, "time_ms": 37.5, "duplicate": false}, {"type": "reply", "timestamp": 1594978364.650853, "bytes": 1408, "response_ip": "2a04:4e42:2d::323", "icmp_seq": 20, "ttl": 59, "time_ms": 33.6, "duplicate": false}]}
26
tests/fixtures/centos-7.7/ping6-hostname-O-D-p-s.out
vendored
Normal file
@@ -0,0 +1,26 @@
PATTERN: 0xabcd
PING www.cnn.com(2a04:4e42:2d::323 (2a04:4e42:2d::323)) 1400 data bytes
[1594978345.609669] 1408 bytes from 2a04:4e42:2d::323 (2a04:4e42:2d::323): icmp_seq=1 ttl=59 time=32.4 ms
[1594978346.585420] 1408 bytes from 2a04:4e42:2d::323 (2a04:4e42:2d::323): icmp_seq=2 ttl=59 time=39.9 ms
[1594978347.594128] 1408 bytes from 2a04:4e42:2d::323 (2a04:4e42:2d::323): icmp_seq=3 ttl=59 time=42.3 ms
[1594978348.595221] 1408 bytes from 2a04:4e42:2d::323 (2a04:4e42:2d::323): icmp_seq=4 ttl=59 time=40.2 ms
[1594978349.600372] 1408 bytes from 2a04:4e42:2d::323 (2a04:4e42:2d::323): icmp_seq=5 ttl=59 time=43.2 ms
[1594978350.590676] 1408 bytes from 2a04:4e42:2d::323 (2a04:4e42:2d::323): icmp_seq=6 ttl=59 time=31.8 ms
[1594978351.601527] 1408 bytes from 2a04:4e42:2d::323 (2a04:4e42:2d::323): icmp_seq=7 ttl=59 time=41.8 ms
[1594978352.604195] 1408 bytes from 2a04:4e42:2d::323 (2a04:4e42:2d::323): icmp_seq=8 ttl=59 time=41.7 ms
[1594978353.607212] 1408 bytes from 2a04:4e42:2d::323 (2a04:4e42:2d::323): icmp_seq=9 ttl=59 time=42.0 ms
[1594978354.610771] 1408 bytes from 2a04:4e42:2d::323 (2a04:4e42:2d::323): icmp_seq=10 ttl=59 time=40.7 ms
[1594978355.613729] 1408 bytes from 2a04:4e42:2d::323 (2a04:4e42:2d::323): icmp_seq=11 ttl=59 time=40.4 ms
[1594978356.611887] 1408 bytes from 2a04:4e42:2d::323 (2a04:4e42:2d::323): icmp_seq=12 ttl=59 time=32.6 ms
[1594978357.624810] 1408 bytes from 2a04:4e42:2d::323 (2a04:4e42:2d::323): icmp_seq=13 ttl=59 time=40.1 ms
[1594978358.629185] 1408 bytes from 2a04:4e42:2d::323 (2a04:4e42:2d::323): icmp_seq=14 ttl=59 time=42.0 ms
[1594978359.634854] 1408 bytes from 2a04:4e42:2d::323 (2a04:4e42:2d::323): icmp_seq=15 ttl=59 time=41.2 ms
[1594978360.638344] 1408 bytes from 2a04:4e42:2d::323 (2a04:4e42:2d::323): icmp_seq=16 ttl=59 time=40.6 ms
[1594978361.640968] 1408 bytes from 2a04:4e42:2d::323 (2a04:4e42:2d::323): icmp_seq=17 ttl=59 time=40.7 ms
[1594978362.645739] 1408 bytes from 2a04:4e42:2d::323 (2a04:4e42:2d::323): icmp_seq=18 ttl=59 time=39.9 ms
[1594978363.646700] 1408 bytes from 2a04:4e42:2d::323 (2a04:4e42:2d::323): icmp_seq=19 ttl=59 time=37.5 ms
[1594978364.650853] 1408 bytes from 2a04:4e42:2d::323 (2a04:4e42:2d::323): icmp_seq=20 ttl=59 time=33.6 ms
|
||||
|
||||
--- www.cnn.com ping statistics ---
|
||||
20 packets transmitted, 20 received, 0% packet loss, time 19077ms
|
||||
rtt min/avg/max/mdev = 31.845/39.274/43.243/3.522 ms
|
||||
1
tests/fixtures/centos-7.7/ping6-hostname-O-p.json
vendored
Normal file
@@ -0,0 +1 @@
{"destination_ip": "2a04:4e42:2d::323", "data_bytes": 56, "pattern": "0xabcd", "destination": "www.cnn.com", "packets_transmitted": 20, "packets_received": 20, "packet_loss_percent": 0.0, "duplicates": 0, "time_ms": 19164.0, "round_trip_ms_min": 30.757, "round_trip_ms_avg": 37.455, "round_trip_ms_max": 42.652, "round_trip_ms_stddev": 3.338, "responses": [{"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "2a04:4e42:2d::323", "icmp_seq": 1, "ttl": 59, "time_ms": 30.9, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "2a04:4e42:2d::323", "icmp_seq": 2, "ttl": 59, "time_ms": 39.0, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "2a04:4e42:2d::323", "icmp_seq": 3, "ttl": 59, "time_ms": 32.6, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "2a04:4e42:2d::323", "icmp_seq": 4, "ttl": 59, "time_ms": 38.4, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "2a04:4e42:2d::323", "icmp_seq": 5, "ttl": 59, "time_ms": 38.8, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "2a04:4e42:2d::323", "icmp_seq": 6, "ttl": 59, "time_ms": 42.6, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "2a04:4e42:2d::323", "icmp_seq": 7, "ttl": 59, "time_ms": 30.7, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "2a04:4e42:2d::323", "icmp_seq": 8, "ttl": 59, "time_ms": 39.4, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "2a04:4e42:2d::323", "icmp_seq": 9, "ttl": 59, "time_ms": 39.3, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "2a04:4e42:2d::323", "icmp_seq": 10, "ttl": 59, "time_ms": 38.9, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "2a04:4e42:2d::323", "icmp_seq": 11, "ttl": 59, "time_ms": 38.6, 
"duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "2a04:4e42:2d::323", "icmp_seq": 12, "ttl": 59, "time_ms": 38.2, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "2a04:4e42:2d::323", "icmp_seq": 13, "ttl": 59, "time_ms": 39.6, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "2a04:4e42:2d::323", "icmp_seq": 14, "ttl": 59, "time_ms": 37.4, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "2a04:4e42:2d::323", "icmp_seq": 15, "ttl": 59, "time_ms": 33.7, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "2a04:4e42:2d::323", "icmp_seq": 16, "ttl": 59, "time_ms": 39.4, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "2a04:4e42:2d::323", "icmp_seq": 17, "ttl": 59, "time_ms": 38.9, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "2a04:4e42:2d::323", "icmp_seq": 18, "ttl": 59, "time_ms": 41.3, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "2a04:4e42:2d::323", "icmp_seq": 19, "ttl": 59, "time_ms": 32.2, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "2a04:4e42:2d::323", "icmp_seq": 20, "ttl": 59, "time_ms": 38.4, "duplicate": false}]}
26
tests/fixtures/centos-7.7/ping6-hostname-O-p.out
vendored
Normal file
@@ -0,0 +1,26 @@
PATTERN: 0xabcd
PING www.cnn.com(2a04:4e42:2d::323 (2a04:4e42:2d::323)) 56 data bytes
64 bytes from 2a04:4e42:2d::323 (2a04:4e42:2d::323): icmp_seq=1 ttl=59 time=30.9 ms
64 bytes from 2a04:4e42:2d::323 (2a04:4e42:2d::323): icmp_seq=2 ttl=59 time=39.0 ms
64 bytes from 2a04:4e42:2d::323 (2a04:4e42:2d::323): icmp_seq=3 ttl=59 time=32.6 ms
64 bytes from 2a04:4e42:2d::323 (2a04:4e42:2d::323): icmp_seq=4 ttl=59 time=38.4 ms
64 bytes from 2a04:4e42:2d::323 (2a04:4e42:2d::323): icmp_seq=5 ttl=59 time=38.8 ms
64 bytes from 2a04:4e42:2d::323 (2a04:4e42:2d::323): icmp_seq=6 ttl=59 time=42.6 ms
64 bytes from 2a04:4e42:2d::323 (2a04:4e42:2d::323): icmp_seq=7 ttl=59 time=30.7 ms
64 bytes from 2a04:4e42:2d::323 (2a04:4e42:2d::323): icmp_seq=8 ttl=59 time=39.4 ms
64 bytes from 2a04:4e42:2d::323 (2a04:4e42:2d::323): icmp_seq=9 ttl=59 time=39.3 ms
64 bytes from 2a04:4e42:2d::323 (2a04:4e42:2d::323): icmp_seq=10 ttl=59 time=38.9 ms
64 bytes from 2a04:4e42:2d::323 (2a04:4e42:2d::323): icmp_seq=11 ttl=59 time=38.6 ms
64 bytes from 2a04:4e42:2d::323 (2a04:4e42:2d::323): icmp_seq=12 ttl=59 time=38.2 ms
64 bytes from 2a04:4e42:2d::323 (2a04:4e42:2d::323): icmp_seq=13 ttl=59 time=39.6 ms
64 bytes from 2a04:4e42:2d::323 (2a04:4e42:2d::323): icmp_seq=14 ttl=59 time=37.4 ms
64 bytes from 2a04:4e42:2d::323 (2a04:4e42:2d::323): icmp_seq=15 ttl=59 time=33.7 ms
64 bytes from 2a04:4e42:2d::323 (2a04:4e42:2d::323): icmp_seq=16 ttl=59 time=39.4 ms
64 bytes from 2a04:4e42:2d::323 (2a04:4e42:2d::323): icmp_seq=17 ttl=59 time=38.9 ms
64 bytes from 2a04:4e42:2d::323 (2a04:4e42:2d::323): icmp_seq=18 ttl=59 time=41.3 ms
64 bytes from 2a04:4e42:2d::323 (2a04:4e42:2d::323): icmp_seq=19 ttl=59 time=32.2 ms
64 bytes from 2a04:4e42:2d::323 (2a04:4e42:2d::323): icmp_seq=20 ttl=59 time=38.4 ms

--- www.cnn.com ping statistics ---
20 packets transmitted, 20 received, 0% packet loss, time 19164ms
rtt min/avg/max/mdev = 30.757/37.455/42.652/3.338 ms
1
tests/fixtures/centos-7.7/ping6-ip-O-D-p.json
vendored
Normal file
@@ -0,0 +1 @@
{"destination_ip": "2a04:4e42:600::323", "data_bytes": 56, "pattern": "0xabcd", "destination": "2a04:4e42:600::323", "packets_transmitted": 20, "packets_received": 19, "packet_loss_percent": 5.0, "duplicates": 0, "time_ms": 19074.0, "round_trip_ms_min": 28.15, "round_trip_ms_avg": 33.534, "round_trip_ms_max": 39.843, "round_trip_ms_stddev": 3.489, "responses": [{"type": "reply", "timestamp": 1594976827.240914, "bytes": 64, "response_ip": "2a04:4e42:600::323", "icmp_seq": 1, "ttl": 59, "time_ms": 28.7, "duplicate": false}, {"type": "reply", "timestamp": 1594976828.25493, "bytes": 64, "response_ip": "2a04:4e42:600::323", "icmp_seq": 2, "ttl": 59, "time_ms": 37.2, "duplicate": false}, {"type": "reply", "timestamp": 1594976829.252877, "bytes": 64, "response_ip": "2a04:4e42:600::323", "icmp_seq": 3, "ttl": 59, "time_ms": 29.7, "duplicate": false}, {"type": "reply", "timestamp": 1594976830.262654, "bytes": 64, "response_ip": "2a04:4e42:600::323", "icmp_seq": 4, "ttl": 59, "time_ms": 37.7, "duplicate": false}, {"type": "reply", "timestamp": 1594976831.265626, "bytes": 64, "response_ip": "2a04:4e42:600::323", "icmp_seq": 5, "ttl": 59, "time_ms": 34.8, "duplicate": false}, {"type": "reply", "timestamp": 1594976832.269834, "bytes": 64, "response_ip": "2a04:4e42:600::323", "icmp_seq": 6, "ttl": 59, "time_ms": 35.6, "duplicate": false}, {"type": "reply", "timestamp": 1594976833.268059, "bytes": 64, "response_ip": "2a04:4e42:600::323", "icmp_seq": 7, "ttl": 59, "time_ms": 28.4, "duplicate": false}, {"type": "reply", "timestamp": 1594976834.274292, "bytes": 64, "response_ip": "2a04:4e42:600::323", "icmp_seq": 8, "ttl": 59, "time_ms": 28.1, "duplicate": false}, {"type": "reply", "timestamp": 1594976835.287123, "bytes": 64, "response_ip": "2a04:4e42:600::323", "icmp_seq": 9, "ttl": 59, "time_ms": 34.9, "duplicate": false}, {"type": "reply", "timestamp": 1594976836.287707, "bytes": 64, "response_ip": "2a04:4e42:600::323", "icmp_seq": 10, "ttl": 59, "time_ms": 34.4, "duplicate": 
false}, {"type": "reply", "timestamp": 1594976837.290589, "bytes": 64, "response_ip": "2a04:4e42:600::323", "icmp_seq": 11, "ttl": 59, "time_ms": 35.2, "duplicate": false}, {"type": "reply", "timestamp": 1594976838.293514, "bytes": 64, "response_ip": "2a04:4e42:600::323", "icmp_seq": 12, "ttl": 59, "time_ms": 35.4, "duplicate": false}, {"type": "reply", "timestamp": 1594976839.290914, "bytes": 64, "response_ip": "2a04:4e42:600::323", "icmp_seq": 13, "ttl": 59, "time_ms": 29.8, "duplicate": false}, {"type": "reply", "timestamp": 1594976840.292897, "bytes": 64, "response_ip": "2a04:4e42:600::323", "icmp_seq": 14, "ttl": 59, "time_ms": 28.5, "duplicate": false}, {"type": "timeout", "timestamp": 1594976842.269238, "icmp_seq": 15}, {"type": "reply", "timestamp": 1594976842.30145, "bytes": 64, "response_ip": "2a04:4e42:600::323", "icmp_seq": 16, "ttl": 59, "time_ms": 31.8, "duplicate": false}, {"type": "reply", "timestamp": 1594976843.312998, "bytes": 64, "response_ip": "2a04:4e42:600::323", "icmp_seq": 17, "ttl": 59, "time_ms": 39.8, "duplicate": false}, {"type": "reply", "timestamp": 1594976844.314228, "bytes": 64, "response_ip": "2a04:4e42:600::323", "icmp_seq": 18, "ttl": 59, "time_ms": 35.7, "duplicate": false}, {"type": "reply", "timestamp": 1594976845.315518, "bytes": 64, "response_ip": "2a04:4e42:600::323", "icmp_seq": 19, "ttl": 59, "time_ms": 35.1, "duplicate": false}, {"type": "reply", "timestamp": 1594976846.321706, "bytes": 64, "response_ip": "2a04:4e42:600::323", "icmp_seq": 20, "ttl": 59, "time_ms": 35.4, "duplicate": false}]}
26
tests/fixtures/centos-7.7/ping6-ip-O-D-p.out
vendored
Normal file
@@ -0,0 +1,26 @@
PATTERN: 0xabcd
PING 2a04:4e42:600::323(2a04:4e42:600::323) 56 data bytes
[1594976827.240914] 64 bytes from 2a04:4e42:600::323: icmp_seq=1 ttl=59 time=28.7 ms
[1594976828.254930] 64 bytes from 2a04:4e42:600::323: icmp_seq=2 ttl=59 time=37.2 ms
[1594976829.252877] 64 bytes from 2a04:4e42:600::323: icmp_seq=3 ttl=59 time=29.7 ms
[1594976830.262654] 64 bytes from 2a04:4e42:600::323: icmp_seq=4 ttl=59 time=37.7 ms
[1594976831.265626] 64 bytes from 2a04:4e42:600::323: icmp_seq=5 ttl=59 time=34.8 ms
[1594976832.269834] 64 bytes from 2a04:4e42:600::323: icmp_seq=6 ttl=59 time=35.6 ms
[1594976833.268059] 64 bytes from 2a04:4e42:600::323: icmp_seq=7 ttl=59 time=28.4 ms
[1594976834.274292] 64 bytes from 2a04:4e42:600::323: icmp_seq=8 ttl=59 time=28.1 ms
[1594976835.287123] 64 bytes from 2a04:4e42:600::323: icmp_seq=9 ttl=59 time=34.9 ms
[1594976836.287707] 64 bytes from 2a04:4e42:600::323: icmp_seq=10 ttl=59 time=34.4 ms
[1594976837.290589] 64 bytes from 2a04:4e42:600::323: icmp_seq=11 ttl=59 time=35.2 ms
[1594976838.293514] 64 bytes from 2a04:4e42:600::323: icmp_seq=12 ttl=59 time=35.4 ms
[1594976839.290914] 64 bytes from 2a04:4e42:600::323: icmp_seq=13 ttl=59 time=29.8 ms
[1594976840.292897] 64 bytes from 2a04:4e42:600::323: icmp_seq=14 ttl=59 time=28.5 ms
[1594976842.269238] no answer yet for icmp_seq=15
[1594976842.301450] 64 bytes from 2a04:4e42:600::323: icmp_seq=16 ttl=59 time=31.8 ms
[1594976843.312998] 64 bytes from 2a04:4e42:600::323: icmp_seq=17 ttl=59 time=39.8 ms
[1594976844.314228] 64 bytes from 2a04:4e42:600::323: icmp_seq=18 ttl=59 time=35.7 ms
[1594976845.315518] 64 bytes from 2a04:4e42:600::323: icmp_seq=19 ttl=59 time=35.1 ms
[1594976846.321706] 64 bytes from 2a04:4e42:600::323: icmp_seq=20 ttl=59 time=35.4 ms

--- 2a04:4e42:600::323 ping statistics ---
20 packets transmitted, 19 received, 5% packet loss, time 19074ms
rtt min/avg/max/mdev = 28.150/33.534/39.843/3.489 ms
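The `-D` fixtures above carry a `[<epoch seconds>.<microseconds>]` prefix on each reply line, and the paired `.json` fixtures store that same value as a float `timestamp` field. A short stdlib sketch (not part of the repo) showing the correspondence, with the sample line taken from the fixture:

```python
from datetime import datetime, timezone

# Reply line as captured with `ping -D` (from the ping6-ip-O-D-p.out fixture).
line = "[1594976827.240914] 64 bytes from 2a04:4e42:600::323: icmp_seq=1 ttl=59 time=28.7 ms"

# The bracketed prefix is a Unix epoch time with microsecond precision;
# the JSON fixture records it verbatim as a float.
ts = float(line[1 : line.index("]")])
when = datetime.fromtimestamp(ts, tz=timezone.utc)
print(f"{ts} -> {when.isoformat()}")
```

When the `-D` flag is absent (as in the plain `-O -p` fixtures), no prefix is printed and the JSON `timestamp` field is `null` instead.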
1
tests/fixtures/centos-7.7/ping6-ip-O-p.json
vendored
Normal file
@@ -0,0 +1 @@
{"destination_ip": "2a04:4e42:600::323", "data_bytes": 56, "pattern": "0xabcd", "destination": "2a04:4e42:600::323", "packets_transmitted": 20, "packets_received": 19, "packet_loss_percent": 5.0, "duplicates": 0, "time_ms": 19067.0, "round_trip_ms_min": 27.064, "round_trip_ms_avg": 33.626, "round_trip_ms_max": 38.146, "round_trip_ms_stddev": 3.803, "responses": [{"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "2a04:4e42:600::323", "icmp_seq": 1, "ttl": 59, "time_ms": 27.9, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "2a04:4e42:600::323", "icmp_seq": 2, "ttl": 59, "time_ms": 28.4, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "2a04:4e42:600::323", "icmp_seq": 3, "ttl": 59, "time_ms": 36.0, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "2a04:4e42:600::323", "icmp_seq": 4, "ttl": 59, "time_ms": 28.5, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "2a04:4e42:600::323", "icmp_seq": 5, "ttl": 59, "time_ms": 35.8, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "2a04:4e42:600::323", "icmp_seq": 6, "ttl": 59, "time_ms": 34.4, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "2a04:4e42:600::323", "icmp_seq": 7, "ttl": 59, "time_ms": 30.7, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "2a04:4e42:600::323", "icmp_seq": 8, "ttl": 59, "time_ms": 28.5, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "2a04:4e42:600::323", "icmp_seq": 9, "ttl": 59, "time_ms": 36.5, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "2a04:4e42:600::323", "icmp_seq": 10, "ttl": 59, "time_ms": 36.3, "duplicate": false}, {"type": "timeout", "timestamp": null, "icmp_seq": 11}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": 
"2a04:4e42:600::323", "icmp_seq": 12, "ttl": 59, "time_ms": 37.4, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "2a04:4e42:600::323", "icmp_seq": 13, "ttl": 59, "time_ms": 30.7, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "2a04:4e42:600::323", "icmp_seq": 14, "ttl": 59, "time_ms": 36.5, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "2a04:4e42:600::323", "icmp_seq": 15, "ttl": 59, "time_ms": 35.4, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "2a04:4e42:600::323", "icmp_seq": 16, "ttl": 59, "time_ms": 36.3, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "2a04:4e42:600::323", "icmp_seq": 17, "ttl": 59, "time_ms": 37.5, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "2a04:4e42:600::323", "icmp_seq": 18, "ttl": 59, "time_ms": 36.2, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "2a04:4e42:600::323", "icmp_seq": 19, "ttl": 59, "time_ms": 27.0, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "2a04:4e42:600::323", "icmp_seq": 20, "ttl": 59, "time_ms": 38.1, "duplicate": false}]}
26
tests/fixtures/centos-7.7/ping6-ip-O-p.out
vendored
Normal file
@@ -0,0 +1,26 @@
PATTERN: 0xabcd
PING 2a04:4e42:600::323(2a04:4e42:600::323) 56 data bytes
64 bytes from 2a04:4e42:600::323: icmp_seq=1 ttl=59 time=27.9 ms
64 bytes from 2a04:4e42:600::323: icmp_seq=2 ttl=59 time=28.4 ms
64 bytes from 2a04:4e42:600::323: icmp_seq=3 ttl=59 time=36.0 ms
64 bytes from 2a04:4e42:600::323: icmp_seq=4 ttl=59 time=28.5 ms
64 bytes from 2a04:4e42:600::323: icmp_seq=5 ttl=59 time=35.8 ms
64 bytes from 2a04:4e42:600::323: icmp_seq=6 ttl=59 time=34.4 ms
64 bytes from 2a04:4e42:600::323: icmp_seq=7 ttl=59 time=30.7 ms
64 bytes from 2a04:4e42:600::323: icmp_seq=8 ttl=59 time=28.5 ms
64 bytes from 2a04:4e42:600::323: icmp_seq=9 ttl=59 time=36.5 ms
64 bytes from 2a04:4e42:600::323: icmp_seq=10 ttl=59 time=36.3 ms
no answer yet for icmp_seq=11
64 bytes from 2a04:4e42:600::323: icmp_seq=12 ttl=59 time=37.4 ms
64 bytes from 2a04:4e42:600::323: icmp_seq=13 ttl=59 time=30.7 ms
64 bytes from 2a04:4e42:600::323: icmp_seq=14 ttl=59 time=36.5 ms
64 bytes from 2a04:4e42:600::323: icmp_seq=15 ttl=59 time=35.4 ms
64 bytes from 2a04:4e42:600::323: icmp_seq=16 ttl=59 time=36.3 ms
64 bytes from 2a04:4e42:600::323: icmp_seq=17 ttl=59 time=37.5 ms
64 bytes from 2a04:4e42:600::323: icmp_seq=18 ttl=59 time=36.2 ms
64 bytes from 2a04:4e42:600::323: icmp_seq=19 ttl=59 time=27.0 ms
64 bytes from 2a04:4e42:600::323: icmp_seq=20 ttl=59 time=38.1 ms

--- 2a04:4e42:600::323 ping statistics ---
20 packets transmitted, 19 received, 5% packet loss, time 19067ms
rtt min/avg/max/mdev = 27.064/33.626/38.146/3.803 ms
1
tests/fixtures/centos-7.7/ping6-ip-dup.json
vendored
Normal file
@@ -0,0 +1 @@
{"destination_ip": "ff02::1%ens33", "data_bytes": 56, "pattern": null, "destination": "ff02::1%ens33", "packets_transmitted": 5, "packets_received": 5, "packet_loss_percent": 0.0, "duplicates": 4, "time_ms": 4017.0, "round_trip_ms_min": 0.245, "round_trip_ms_avg": 4.726, "round_trip_ms_max": 12.568, "round_trip_ms_stddev": 5.395, "responses": [{"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "fe80::c48:5896:526d:81ba%ens33", "icmp_seq": 1, "ttl": 64, "time_ms": 0.245, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "fe80::feae:34ff:fea1:3a80%ens33", "icmp_seq": 1, "ttl": 64, "time_ms": 3.65, "duplicate": true}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "fe80::c48:5896:526d:81ba%ens33", "icmp_seq": 2, "ttl": 64, "time_ms": 0.329, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "fe80::feae:34ff:fea1:3a80%ens33", "icmp_seq": 2, "ttl": 64, "time_ms": 11.7, "duplicate": true}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "fe80::c48:5896:526d:81ba%ens33", "icmp_seq": 3, "ttl": 64, "time_ms": 0.592, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "fe80::feae:34ff:fea1:3a80%ens33", "icmp_seq": 3, "ttl": 64, "time_ms": 12.3, "duplicate": true}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "fe80::c48:5896:526d:81ba%ens33", "icmp_seq": 4, "ttl": 64, "time_ms": 0.51, "duplicate": false}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "fe80::feae:34ff:fea1:3a80%ens33", "icmp_seq": 4, "ttl": 64, "time_ms": 12.5, "duplicate": true}, {"type": "reply", "timestamp": null, "bytes": 64, "response_ip": "fe80::c48:5896:526d:81ba%ens33", "icmp_seq": 5, "ttl": 64, "time_ms": 0.538, "duplicate": false}]}
14
tests/fixtures/centos-7.7/ping6-ip-dup.out
vendored
Normal file
@@ -0,0 +1,14 @@
PING ff02::1%ens33(ff02::1%ens33) 56 data bytes
64 bytes from fe80::c48:5896:526d:81ba%ens33: icmp_seq=1 ttl=64 time=0.245 ms
64 bytes from fe80::feae:34ff:fea1:3a80%ens33: icmp_seq=1 ttl=64 time=3.65 ms (DUP!)
64 bytes from fe80::c48:5896:526d:81ba%ens33: icmp_seq=2 ttl=64 time=0.329 ms
64 bytes from fe80::feae:34ff:fea1:3a80%ens33: icmp_seq=2 ttl=64 time=11.7 ms (DUP!)
64 bytes from fe80::c48:5896:526d:81ba%ens33: icmp_seq=3 ttl=64 time=0.592 ms
64 bytes from fe80::feae:34ff:fea1:3a80%ens33: icmp_seq=3 ttl=64 time=12.3 ms (DUP!)
64 bytes from fe80::c48:5896:526d:81ba%ens33: icmp_seq=4 ttl=64 time=0.510 ms
64 bytes from fe80::feae:34ff:fea1:3a80%ens33: icmp_seq=4 ttl=64 time=12.5 ms (DUP!)
64 bytes from fe80::c48:5896:526d:81ba%ens33: icmp_seq=5 ttl=64 time=0.538 ms

--- ff02::1%ens33 ping statistics ---
5 packets transmitted, 5 received, +4 duplicates, 0% packet loss, time 4017ms
rtt min/avg/max/mdev = 0.245/4.726/12.568/5.395 ms
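Each fixture pair above maps a raw ping6 `.out` capture to the parsed `.json` that the jc parser is expected to produce. As a minimal illustration of that mapping (a stdlib sketch, not jc's actual ping parser), a single reply line can be reduced to the fields the JSON fixtures record:

```python
import re

# Matches a ping/ping6 reply line, with an optional "[epoch]" prefix (-D flag),
# an optional "(address)" after a resolved hostname target, and an optional
# "(DUP!)" duplicate marker, as seen in the fixtures above.
REPLY = re.compile(
    r'^(?:\[(?P<timestamp>[\d.]+)\] )?'
    r'(?P<bytes>\d+) bytes from (?P<response_ip>\S+?)(?: \(\S+\))?: '
    r'icmp_seq=(?P<icmp_seq>\d+) ttl=(?P<ttl>\d+) time=(?P<time_ms>[\d.]+) ms'
    r'(?P<dup> \(DUP!\))?$'
)

def parse_reply(line: str) -> dict:
    """Reduce one reply line to the per-response fields the .json fixtures use."""
    m = REPLY.match(line)
    if m is None:
        raise ValueError(f"not a reply line: {line!r}")
    return {
        "type": "reply",
        "timestamp": float(m["timestamp"]) if m["timestamp"] else None,
        "bytes": int(m["bytes"]),
        "response_ip": m["response_ip"],
        "icmp_seq": int(m["icmp_seq"]),
        "ttl": int(m["ttl"]),
        "time_ms": float(m["time_ms"]),
        "duplicate": m["dup"] is not None,
    }
```

Run against the duplicate-reply fixture, `parse_reply("64 bytes from fe80::feae:34ff:fea1:3a80%ens33: icmp_seq=2 ttl=64 time=11.7 ms (DUP!)")` yields `duplicate: true` with `time_ms: 11.7`, matching the corresponding entry in ping6-ip-dup.json. The real parser additionally handles timeouts, per-platform output variants, and the summary statistics block.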
2
tests/fixtures/centos-7.7/ss-sudo-a.json
vendored
File diff suppressed because one or more lines are too long
Some files were not shown because too many files have changed in this diff