mirror of
https://github.com/kellyjonbrazil/jc.git
synced 2025-07-07 00:57:22 +02:00
CHANGELOG
@ -1,5 +1,18 @@
jc changelog

+20221216 v1.22.3
+- Add Common Log Format and Combined Log Format file parser (standard and streaming)
+- Add PostgreSQL password file parser
+- Add openvpn-status.log file parser
+- Add `cbt` command parser (Google Bigtable)
+- Enhance `ifconfig` parser with interface lane information on BSD
+- Enhance `ifconfig` parser with additional IPv6 `scope_id` info for BSD
+- Fix `ifconfig` parser to capture some IPv6 addresses missed on BSD
+- Fix `git-log` and `git-log-s` parsers for failure on empty author name
+- Update `os-prober` parser with split EFI partition fields
+- Add ISO string attribute (`.iso`) to `jc.utils.timestamp()`
+- Fix several documentation typos
+
20221107 v1.22.2
- add `sshd_conf` parser for `sshd` configuration files and `sshd -T` output
- add `findmnt` command parser
@ -581,7 +594,7 @@ jc changelog

20200211 v1.7.3
- Add alternative 'magic' syntax: e.g. `jc ls -al`
-- Options can now be condensed (e.g. -prq is equivalant to -p -r -q)
+- Options can now be condensed (e.g. -prq is equivalent to -p -r -q)

20200208 v1.7.2
- Include test fixtures in wheel and sdist

@ -161,10 +161,13 @@ option.
| ` --asciitable` | ASCII and Unicode table parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/asciitable) |
| ` --asciitable-m` | multi-line ASCII and Unicode table parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/asciitable_m) |
| ` --blkid` | `blkid` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/blkid) |
+| ` --cbt` | `cbt` (Google Bigtable) command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/cbt) |
| ` --cef` | CEF string parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/cef) |
| ` --cef-s` | CEF string streaming parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/cef_s) |
| ` --chage` | `chage --list` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/chage) |
| ` --cksum` | `cksum` and `sum` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/cksum) |
+| ` --clf` | Common and Combined Log Format file parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/clf) |
+| ` --clf-s` | Common and Combined Log Format file streaming parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/clf_s) |
| ` --crontab` | `crontab` command and file parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/crontab) |
| ` --crontab-u` | `crontab` file parser with user support | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/crontab_u) |
| ` --csv` | CSV file parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/csv) |
@ -223,9 +226,11 @@ option.
| ` --netstat` | `netstat` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/netstat) |
| ` --nmcli` | `nmcli` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/nmcli) |
| ` --ntpq` | `ntpq -p` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/ntpq) |
+| ` --openvpn` | openvpn-status.log file parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/openvpn) |
| ` --os-prober` | `os-prober` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/os_prober) |
| ` --passwd` | `/etc/passwd` file parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/passwd) |
| ` --pci-ids` | `pci.ids` file parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/pci_ids) |
+| ` --pgpass` | PostgreSQL password file parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/pgpass) |
| ` --pidstat` | `pidstat -H` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/pidstat) |
| ` --pidstat-s` | `pidstat -H` command streaming parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/pidstat_s) |
| ` --ping` | `ping` and `ping6` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/ping) |
@ -390,7 +395,7 @@ option.
### Streaming Parsers

Most parsers load all of the data from `STDIN`, parse it, then output the entire
JSON document serially. There are some streaming parsers (e.g. `ls-s` and
-`ping-s`) that immediately start processing and outputing the data line-by-line
+`ping-s`) that immediately start processing and outputting the data line-by-line
as [JSON Lines](https://jsonlines.org/) (aka [NDJSON](http://ndjson.org/)) while
it is being received from `STDIN`. This can significantly reduce the amount of
memory required to parse large amounts of command output (e.g. `ls -lR /`) and
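The JSON Lines output described above can be consumed incrementally with nothing more than a per-line `json.loads()`. This sketch uses the standard library only, with made-up sample records standing in for real streaming-parser output:

```python
import json
import io

# simulated streaming parser output: one JSON object per line (JSON Lines)
stream = io.StringIO(
    '{"filename": "a.txt", "size": 1}\n'
    '{"filename": "b.txt", "size": 2}\n'
)

# process each record as it arrives instead of loading one large document
records = []
for line in stream:
    if line.strip():
        records.append(json.loads(line))
```

In a real pipeline, `stream` would be `sys.stdin`, and each record could be handled inside the loop so memory use stays flat regardless of input size.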
@ -3,8 +3,8 @@ _jc()
local cur prev words cword jc_commands jc_parsers jc_options \
    jc_about_options jc_about_mod_options jc_help_options jc_special_options

-jc_commands=(acpi airport arp blkid chage cksum crontab date df dig dmidecode dpkg du env file findmnt finger free git gpg hciconfig id ifconfig iostat iptables iw jobs last lastb ls lsblk lsmod lsof lspci lsusb md5 md5sum mdadm mount mpstat netstat nmcli ntpq os-prober pidstat ping ping6 pip pip3 postconf printenv ps route rpm rsync sfdisk sha1sum sha224sum sha256sum sha384sum sha512sum shasum ss sshd stat sum sysctl systemctl systeminfo timedatectl top tracepath tracepath6 traceroute traceroute6 udevadm ufw uname update-alternatives upower uptime vdir vmstat w wc who xrandr zipinfo)
+jc_commands=(acpi airport arp blkid cbt chage cksum crontab date df dig dmidecode dpkg du env file findmnt finger free git gpg hciconfig id ifconfig iostat iptables iw jobs last lastb ls lsblk lsmod lsof lspci lsusb md5 md5sum mdadm mount mpstat netstat nmcli ntpq os-prober pidstat ping ping6 pip pip3 postconf printenv ps route rpm rsync sfdisk sha1sum sha224sum sha256sum sha384sum sha512sum shasum ss sshd stat sum sysctl systemctl systeminfo timedatectl top tracepath tracepath6 traceroute traceroute6 udevadm ufw uname update-alternatives upower uptime vdir vmstat w wc who xrandr zipinfo)
-jc_parsers=(--acpi --airport --airport-s --arp --asciitable --asciitable-m --blkid --cef --cef-s --chage --cksum --crontab --crontab-u --csv --csv-s --date --datetime-iso --df --dig --dir --dmidecode --dpkg-l --du --email-address --env --file --findmnt --finger --free --fstab --git-log --git-log-s --git-ls-remote --gpg --group --gshadow --hash --hashsum --hciconfig --history --hosts --id --ifconfig --ini --iostat --iostat-s --ip-address --iptables --iw-scan --jar-manifest --jobs --jwt --kv --last --ls --ls-s --lsblk --lsmod --lsof --lspci --lsusb --m3u --mdadm --mount --mpstat --mpstat-s --netstat --nmcli --ntpq --os-prober --passwd --pci-ids --pidstat --pidstat-s --ping --ping-s --pip-list --pip-show --plist --postconf --proc --proc-buddyinfo --proc-consoles --proc-cpuinfo --proc-crypto --proc-devices --proc-diskstats --proc-filesystems --proc-interrupts --proc-iomem --proc-ioports --proc-loadavg --proc-locks --proc-meminfo --proc-modules --proc-mtrr --proc-pagetypeinfo --proc-partitions --proc-slabinfo --proc-softirqs --proc-stat --proc-swaps --proc-uptime --proc-version --proc-vmallocinfo --proc-vmstat --proc-zoneinfo --proc-driver-rtc --proc-net-arp --proc-net-dev --proc-net-dev-mcast --proc-net-if-inet6 --proc-net-igmp --proc-net-igmp6 --proc-net-ipv6-route --proc-net-netlink --proc-net-netstat --proc-net-packet --proc-net-protocols --proc-net-route --proc-net-unix --proc-pid-fdinfo --proc-pid-io --proc-pid-maps --proc-pid-mountinfo --proc-pid-numa-maps --proc-pid-smaps --proc-pid-stat --proc-pid-statm --proc-pid-status --ps --route --rpm-qi --rsync --rsync-s --semver --sfdisk --shadow --ss --sshd-conf --stat --stat-s --sysctl --syslog --syslog-s --syslog-bsd --syslog-bsd-s --systemctl --systemctl-lj --systemctl-ls --systemctl-luf --systeminfo --time --timedatectl --timestamp --top --top-s --tracepath --traceroute --udevadm --ufw --ufw-appinfo --uname --update-alt-gs --update-alt-q --upower --uptime --url --vmstat --vmstat-s --w --wc --who --x509-cert --xml --xrandr --yaml --zipinfo)
+jc_parsers=(--acpi --airport --airport-s --arp --asciitable --asciitable-m --blkid --cbt --cef --cef-s --chage --cksum --clf --clf-s --crontab --crontab-u --csv --csv-s --date --datetime-iso --df --dig --dir --dmidecode --dpkg-l --du --email-address --env --file --findmnt --finger --free --fstab --git-log --git-log-s --git-ls-remote --gpg --group --gshadow --hash --hashsum --hciconfig --history --hosts --id --ifconfig --ini --iostat --iostat-s --ip-address --iptables --iw-scan --jar-manifest --jobs --jwt --kv --last --ls --ls-s --lsblk --lsmod --lsof --lspci --lsusb --m3u --mdadm --mount --mpstat --mpstat-s --netstat --nmcli --ntpq --openvpn --os-prober --passwd --pci-ids --pgpass --pidstat --pidstat-s --ping --ping-s --pip-list --pip-show --plist --postconf --proc --proc-buddyinfo --proc-consoles --proc-cpuinfo --proc-crypto --proc-devices --proc-diskstats --proc-filesystems --proc-interrupts --proc-iomem --proc-ioports --proc-loadavg --proc-locks --proc-meminfo --proc-modules --proc-mtrr --proc-pagetypeinfo --proc-partitions --proc-slabinfo --proc-softirqs --proc-stat --proc-swaps --proc-uptime --proc-version --proc-vmallocinfo --proc-vmstat --proc-zoneinfo --proc-driver-rtc --proc-net-arp --proc-net-dev --proc-net-dev-mcast --proc-net-if-inet6 --proc-net-igmp --proc-net-igmp6 --proc-net-ipv6-route --proc-net-netlink --proc-net-netstat --proc-net-packet --proc-net-protocols --proc-net-route --proc-net-unix --proc-pid-fdinfo --proc-pid-io --proc-pid-maps --proc-pid-mountinfo --proc-pid-numa-maps --proc-pid-smaps --proc-pid-stat --proc-pid-statm --proc-pid-status --ps --route --rpm-qi --rsync --rsync-s --semver --sfdisk --shadow --ss --sshd-conf --stat --stat-s --sysctl --syslog --syslog-s --syslog-bsd --syslog-bsd-s --systemctl --systemctl-lj --systemctl-ls --systemctl-luf --systeminfo --time --timedatectl --timestamp --top --top-s --tracepath --traceroute --udevadm --ufw --ufw-appinfo --uname --update-alt-gs --update-alt-q --upower --uptime --url --vmstat --vmstat-s --w --wc --who --x509-cert --xml --xrandr --yaml --zipinfo)
jc_options=(--force-color -C --debug -d --monochrome -m --meta-out -M --pretty -p --quiet -q --raw -r --unbuffer -u --yaml-out -y)
jc_about_options=(--about -a)
jc_about_mod_options=(--pretty -p --yaml-out -y --monochrome -m --force-color -C)
@ -9,12 +9,13 @@ _jc() {
|
    jc_help_options jc_help_options_describe \
    jc_special_options jc_special_options_describe

-jc_commands=(acpi airport arp blkid chage cksum crontab date df dig dmidecode dpkg du env file findmnt finger free git gpg hciconfig id ifconfig iostat iptables iw jobs last lastb ls lsblk lsmod lsof lspci lsusb md5 md5sum mdadm mount mpstat netstat nmcli ntpq os-prober pidstat ping ping6 pip pip3 postconf printenv ps route rpm rsync sfdisk sha1sum sha224sum sha256sum sha384sum sha512sum shasum ss sshd stat sum sysctl systemctl systeminfo timedatectl top tracepath tracepath6 traceroute traceroute6 udevadm ufw uname update-alternatives upower uptime vdir vmstat w wc who xrandr zipinfo)
+jc_commands=(acpi airport arp blkid cbt chage cksum crontab date df dig dmidecode dpkg du env file findmnt finger free git gpg hciconfig id ifconfig iostat iptables iw jobs last lastb ls lsblk lsmod lsof lspci lsusb md5 md5sum mdadm mount mpstat netstat nmcli ntpq os-prober pidstat ping ping6 pip pip3 postconf printenv ps route rpm rsync sfdisk sha1sum sha224sum sha256sum sha384sum sha512sum shasum ss sshd stat sum sysctl systemctl systeminfo timedatectl top tracepath tracepath6 traceroute traceroute6 udevadm ufw uname update-alternatives upower uptime vdir vmstat w wc who xrandr zipinfo)
jc_commands_describe=(
    'acpi:run "acpi" command with magic syntax.'
    'airport:run "airport" command with magic syntax.'
    'arp:run "arp" command with magic syntax.'
    'blkid:run "blkid" command with magic syntax.'
+    'cbt:run "cbt" command with magic syntax.'
    'chage:run "chage" command with magic syntax.'
    'cksum:run "cksum" command with magic syntax.'
    'crontab:run "crontab" command with magic syntax.'
@ -100,7 +101,7 @@ _jc() {
    'xrandr:run "xrandr" command with magic syntax.'
    'zipinfo:run "zipinfo" command with magic syntax.'
)
-jc_parsers=(--acpi --airport --airport-s --arp --asciitable --asciitable-m --blkid --cef --cef-s --chage --cksum --crontab --crontab-u --csv --csv-s --date --datetime-iso --df --dig --dir --dmidecode --dpkg-l --du --email-address --env --file --findmnt --finger --free --fstab --git-log --git-log-s --git-ls-remote --gpg --group --gshadow --hash --hashsum --hciconfig --history --hosts --id --ifconfig --ini --iostat --iostat-s --ip-address --iptables --iw-scan --jar-manifest --jobs --jwt --kv --last --ls --ls-s --lsblk --lsmod --lsof --lspci --lsusb --m3u --mdadm --mount --mpstat --mpstat-s --netstat --nmcli --ntpq --os-prober --passwd --pci-ids --pidstat --pidstat-s --ping --ping-s --pip-list --pip-show --plist --postconf --proc --proc-buddyinfo --proc-consoles --proc-cpuinfo --proc-crypto --proc-devices --proc-diskstats --proc-filesystems --proc-interrupts --proc-iomem --proc-ioports --proc-loadavg --proc-locks --proc-meminfo --proc-modules --proc-mtrr --proc-pagetypeinfo --proc-partitions --proc-slabinfo --proc-softirqs --proc-stat --proc-swaps --proc-uptime --proc-version --proc-vmallocinfo --proc-vmstat --proc-zoneinfo --proc-driver-rtc --proc-net-arp --proc-net-dev --proc-net-dev-mcast --proc-net-if-inet6 --proc-net-igmp --proc-net-igmp6 --proc-net-ipv6-route --proc-net-netlink --proc-net-netstat --proc-net-packet --proc-net-protocols --proc-net-route --proc-net-unix --proc-pid-fdinfo --proc-pid-io --proc-pid-maps --proc-pid-mountinfo --proc-pid-numa-maps --proc-pid-smaps --proc-pid-stat --proc-pid-statm --proc-pid-status --ps --route --rpm-qi --rsync --rsync-s --semver --sfdisk --shadow --ss --sshd-conf --stat --stat-s --sysctl --syslog --syslog-s --syslog-bsd --syslog-bsd-s --systemctl --systemctl-lj --systemctl-ls --systemctl-luf --systeminfo --time --timedatectl --timestamp --top --top-s --tracepath --traceroute --udevadm --ufw --ufw-appinfo --uname --update-alt-gs --update-alt-q --upower --uptime --url --vmstat --vmstat-s --w --wc --who --x509-cert --xml --xrandr --yaml --zipinfo)
+jc_parsers=(--acpi --airport --airport-s --arp --asciitable --asciitable-m --blkid --cbt --cef --cef-s --chage --cksum --clf --clf-s --crontab --crontab-u --csv --csv-s --date --datetime-iso --df --dig --dir --dmidecode --dpkg-l --du --email-address --env --file --findmnt --finger --free --fstab --git-log --git-log-s --git-ls-remote --gpg --group --gshadow --hash --hashsum --hciconfig --history --hosts --id --ifconfig --ini --iostat --iostat-s --ip-address --iptables --iw-scan --jar-manifest --jobs --jwt --kv --last --ls --ls-s --lsblk --lsmod --lsof --lspci --lsusb --m3u --mdadm --mount --mpstat --mpstat-s --netstat --nmcli --ntpq --openvpn --os-prober --passwd --pci-ids --pgpass --pidstat --pidstat-s --ping --ping-s --pip-list --pip-show --plist --postconf --proc --proc-buddyinfo --proc-consoles --proc-cpuinfo --proc-crypto --proc-devices --proc-diskstats --proc-filesystems --proc-interrupts --proc-iomem --proc-ioports --proc-loadavg --proc-locks --proc-meminfo --proc-modules --proc-mtrr --proc-pagetypeinfo --proc-partitions --proc-slabinfo --proc-softirqs --proc-stat --proc-swaps --proc-uptime --proc-version --proc-vmallocinfo --proc-vmstat --proc-zoneinfo --proc-driver-rtc --proc-net-arp --proc-net-dev --proc-net-dev-mcast --proc-net-if-inet6 --proc-net-igmp --proc-net-igmp6 --proc-net-ipv6-route --proc-net-netlink --proc-net-netstat --proc-net-packet --proc-net-protocols --proc-net-route --proc-net-unix --proc-pid-fdinfo --proc-pid-io --proc-pid-maps --proc-pid-mountinfo --proc-pid-numa-maps --proc-pid-smaps --proc-pid-stat --proc-pid-statm --proc-pid-status --ps --route --rpm-qi --rsync --rsync-s --semver --sfdisk --shadow --ss --sshd-conf --stat --stat-s --sysctl --syslog --syslog-s --syslog-bsd --syslog-bsd-s --systemctl --systemctl-lj --systemctl-ls --systemctl-luf --systeminfo --time --timedatectl --timestamp --top --top-s --tracepath --traceroute --udevadm --ufw --ufw-appinfo --uname --update-alt-gs --update-alt-q --upower --uptime --url --vmstat --vmstat-s --w --wc --who --x509-cert --xml --xrandr --yaml --zipinfo)
jc_parsers_describe=(
    '--acpi:`acpi` command parser'
    '--airport:`airport -I` command parser'
@ -109,10 +110,13 @@ _jc() {
    '--asciitable:ASCII and Unicode table parser'
    '--asciitable-m:multi-line ASCII and Unicode table parser'
    '--blkid:`blkid` command parser'
+    '--cbt:`cbt` (Google Bigtable) command parser'
    '--cef:CEF string parser'
    '--cef-s:CEF string streaming parser'
    '--chage:`chage --list` command parser'
    '--cksum:`cksum` and `sum` command parser'
+    '--clf:Common and Combined Log Format file parser'
+    '--clf-s:Common and Combined Log Format file streaming parser'
    '--crontab:`crontab` command and file parser'
    '--crontab-u:`crontab` file parser with user support'
    '--csv:CSV file parser'
@ -171,9 +175,11 @@ _jc() {
    '--netstat:`netstat` command parser'
    '--nmcli:`nmcli` command parser'
    '--ntpq:`ntpq -p` command parser'
+    '--openvpn:openvpn-status.log file parser'
    '--os-prober:`os-prober` command parser'
    '--passwd:`/etc/passwd` file parser'
    '--pci-ids:`pci.ids` file parser'
+    '--pgpass:PostgreSQL password file parser'
    '--pidstat:`pidstat -H` command parser'
    '--pidstat-s:`pidstat -H` command streaming parser'
    '--ping:`ping` and `ping6` command parser'
docs/parsers/cbt.md
Normal file
@ -0,0 +1,125 @@
[Home](https://kellyjonbrazil.github.io/jc/)

<a id="jc.parsers.cbt"></a>

# jc.parsers.cbt

jc - JSON Convert `cbt` command output parser (Google Bigtable)

Parses the human-, but not machine-, friendly output of the cbt command (for
Google's Bigtable).

No effort is made to convert the data types of the values in the cells.

The `timestamp_epoch` calculated timestamp field is naive. (i.e. based on
the local time of the system the parser is run on)

The `timestamp_epoch_utc` calculated timestamp field is timezone-aware and
is only available if the timestamp has a UTC timezone.

The `timestamp_iso` calculated timestamp field will only include UTC
timezone information if the timestamp has a UTC timezone.

Raw output contains all cells for each column (including timestamps), while
the normal output contains only the latest value for each column.

Usage (cli):

    $ cbt | jc --cbt

or

    $ jc cbt

Usage (module):

    import jc
    result = jc.parse('cbt', cbt_command_output)

Schema:

    [
      {
        "key": string,
        "cells": {
          <string>: {           # column family
            <string>: string    # column: value
          }
        }
      }
    ]

Schema (raw):

    [
      {
        "key": string,
        "cells": [
          {
            "column_family": string,
            "column": string,
            "value": string,
            "timestamp_iso": string,
            "timestamp_epoch": integer,
            "timestamp_epoch_utc": integer
          }
        ]
      }
    ]

Examples:

    $ cbt -project=$PROJECT -instance=$INSTANCE lookup $TABLE foo | jc --cbt -p
    [
      {
        "key": "foo",
        "cells": {
          "foo": {
            "bar": "baz"
          }
        }
      }
    ]

    $ cbt -project=$PROJECT -instance=$INSTANCE lookup $TABLE foo | jc --cbt -p -r
    [
      {
        "key": "foo",
        "cells": [
          {
            "column_family": "foo",
            "column": "bar",
            "value": "baz1",
            "timestamp_iso": "1970-01-01T01:00:00",
            "timestamp_epoch": 32400,
            "timestamp_epoch_utc": null
          }
        ]
      }
    ]
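The relationship between the raw and processed schemas can be illustrated with a small sketch that collapses a raw cell list into the per-column mapping by keeping the value with the newest timestamp. This only illustrates the documented behavior; `latest_cells` is a hypothetical helper, not jc's actual implementation:

```python
def latest_cells(raw_cells):
    """Collapse a raw cell list into {column_family: {column: latest value}}."""
    out = {}
    # sort ascending by timestamp so the newest cell wins the final assignment
    for cell in sorted(raw_cells, key=lambda c: c["timestamp_epoch"]):
        out.setdefault(cell["column_family"], {})[cell["column"]] = cell["value"]
    return out

# two versions of the same column; only the newer one survives processing
raw = [
    {"column_family": "foo", "column": "bar", "value": "baz1", "timestamp_epoch": 32400},
    {"column_family": "foo", "column": "bar", "value": "baz2", "timestamp_epoch": 64800},
]
```

Here `latest_cells(raw)` returns `{"foo": {"bar": "baz2"}}`, matching the "only the latest value for each column" rule stated above.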
<a id="jc.parsers.cbt.parse"></a>

### parse

```python
def parse(data: str,
          raw: bool = False,
          quiet: bool = False) -> List[JSONDictType]
```

Main text parsing function

Parameters:

    data:    (string)  text data to parse
    raw:     (boolean) unprocessed output if True
    quiet:   (boolean) suppress warning messages if True

Returns:

    List of Dictionaries. Raw or processed structured data.

### Parser Information
Compatibility: linux, darwin, cygwin, win32, aix, freebsd

Version 1.0 by Andreas Weiden (andreas.weiden@gmail.com)
docs/parsers/clf.md
Normal file
@ -0,0 +1,199 @@
[Home](https://kellyjonbrazil.github.io/jc/)

<a id="jc.parsers.clf"></a>

# jc.parsers.clf

jc - JSON Convert Common Log Format file parser

This parser will handle the Common Log Format standard as specified at
https://www.w3.org/Daemon/User/Config/Logging.html#common-logfile-format.

Combined Log Format is also supported. (Referer and User Agent fields added)

Extra fields may be present and will be enclosed in the `extra` field as
a single string.

If a log line cannot be parsed, an object with an `unparsable` field will
be present with a value of the original line.

The `epoch` calculated timestamp field is naive. (i.e. based on the
local time of the system the parser is run on)

The `epoch_utc` calculated timestamp field is timezone-aware and is
only available if the timezone field is UTC.
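The naive/aware distinction can be reproduced with the standard library alone: `datetime.strptime` with the `%z` directive parses the CLF date field into a timezone-aware datetime, whose `.timestamp()` is exact regardless of the local timezone. This is a sketch for illustration, not jc's internal code:

```python
from datetime import datetime

# a CLF "date" field as it appears in the examples below
date_field = "10/Oct/2000:13:55:36 -0700"

# %z consumes the "-0700" offset, producing a timezone-aware datetime
dt = datetime.strptime(date_field, "%d/%b/%Y:%H:%M:%S %z")

# aware timestamps are independent of the system's local timezone
epoch = int(dt.timestamp())
```

A naive conversion (dropping `%z` and the offset) would instead interpret the time in the local timezone of the machine running the parser, which is why the `epoch` field above is documented as naive.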
Usage (cli):

    $ cat file.log | jc --clf

Usage (module):

    import jc
    result = jc.parse('clf', common_log_file_output)

Schema:

    Empty strings and `-` values are converted to `null`/`None`.

    [
      {
        "host": string,
        "ident": string,
        "authuser": string,
        "date": string,
        "day": integer,
        "month": string,
        "year": integer,
        "hour": integer,
        "minute": integer,
        "second": integer,
        "tz": string,
        "request": string,
        "request_method": string,
        "request_url": string,
        "request_version": string,
        "status": integer,
        "bytes": integer,
        "referer": string,
        "user_agent": string,
        "extra": string,
        "epoch": integer,        # [0]
        "epoch_utc": integer,    # [1]
        "unparsable": string     # [2]
      }
    ]

    [0] naive timestamp
    [1] timezone-aware timestamp. Only available if timezone field is UTC
    [2] exists if the line was not able to be parsed

Examples:

    $ cat file.log | jc --clf -p
    [
      {
        "host": "127.0.0.1",
        "ident": "user-identifier",
        "authuser": "frank",
        "date": "10/Oct/2000:13:55:36 -0700",
        "day": 10,
        "month": "Oct",
        "year": 2000,
        "hour": 13,
        "minute": 55,
        "second": 36,
        "tz": "-0700",
        "request": "GET /apache_pb.gif HTTPS/1.0",
        "status": 200,
        "bytes": 2326,
        "referer": null,
        "user_agent": null,
        "extra": null,
        "request_method": "GET",
        "request_url": "/apache_pb.gif",
        "request_version": "HTTPS/1.0",
        "epoch": 971211336,
        "epoch_utc": null
      },
      {
        "host": "1.1.1.2",
        "ident": null,
        "authuser": null,
        "date": "11/Nov/2016:03:04:55 +0100",
        "day": 11,
        "month": "Nov",
        "year": 2016,
        "hour": 3,
        "minute": 4,
        "second": 55,
        "tz": "+0100",
        "request": "GET /",
        "status": 200,
        "bytes": 83,
        "referer": null,
        "user_agent": null,
        "extra": "- 9221 1.1.1.1",
        "request_method": "GET",
        "request_url": "/",
        "request_version": null,
        "epoch": 1478862295,
        "epoch_utc": null
      },
      ...
    ]

    $ cat file.log | jc --clf -p -r
    [
      {
        "host": "127.0.0.1",
        "ident": "user-identifier",
        "authuser": "frank",
        "date": "10/Oct/2000:13:55:36 -0700",
        "day": "10",
        "month": "Oct",
        "year": "2000",
        "hour": "13",
        "minute": "55",
        "second": "36",
        "tz": "-0700",
        "request": "GET /apache_pb.gif HTTPS/1.0",
        "status": "200",
        "bytes": "2326",
        "referer": null,
        "user_agent": null,
        "extra": "",
        "request_method": "GET",
        "request_url": "/apache_pb.gif",
        "request_version": "HTTPS/1.0"
      },
      {
        "host": "1.1.1.2",
        "ident": "-",
        "authuser": "-",
        "date": "11/Nov/2016:03:04:55 +0100",
        "day": "11",
        "month": "Nov",
        "year": "2016",
        "hour": "03",
        "minute": "04",
        "second": "55",
        "tz": "+0100",
        "request": "GET /",
        "status": "200",
        "bytes": "83",
        "referer": "-",
        "user_agent": "-",
        "extra": "- 9221 1.1.1.1",
        "request_method": "GET",
        "request_url": "/",
        "request_version": null
      },
      ...
    ]
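As an illustration of how the fields above map onto a log line, a Common Log Format record can be split with a single regular expression, converting `-` placeholders to `None` as in the processed output. This is a simplified sketch, not the pattern jc actually uses (it omits the Combined Log Format and `extra` fields):

```python
import re

# host ident authuser [date] "request" status bytes
CLF = re.compile(
    r'(?P<host>\S+) (?P<ident>\S+) (?P<authuser>\S+) '
    r'\[(?P<date>[^\]]+)\] "(?P<request>[^"]*)" '
    r'(?P<status>\d+) (?P<bytes>\S+)'
)

def parse_clf_line(line):
    m = CLF.match(line)
    if m is None:
        # mirror jc's behavior: keep the original line under "unparsable"
        return {"unparsable": line}
    return {k: (None if v == "-" else v) for k, v in m.groupdict().items()}

line = ('127.0.0.1 user-identifier frank [10/Oct/2000:13:55:36 -0700] '
        '"GET /apache_pb.gif HTTPS/1.0" 200 2326')
record = parse_clf_line(line)
```

Note the sketch leaves all values as strings, as in the raw (`-r`) output; type conversion of `status`, `bytes`, and the date components happens in post-processing.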
<a id="jc.parsers.clf.parse"></a>

### parse

```python
def parse(data: str,
          raw: bool = False,
          quiet: bool = False) -> List[JSONDictType]
```

Main text parsing function

Parameters:

    data:    (string)  text data to parse
    raw:     (boolean) unprocessed output if True
    quiet:   (boolean) suppress warning messages if True

Returns:

    List of Dictionaries. Raw or processed structured data.

### Parser Information
Compatibility: linux, darwin, cygwin, win32, aix, freebsd

Version 1.0 by Kelly Brazil (kellyjonbrazil@gmail.com)
docs/parsers/clf_s.md
Normal file
@ -0,0 +1,117 @@
[Home](https://kellyjonbrazil.github.io/jc/)

<a id="jc.parsers.clf_s"></a>

# jc.parsers.clf\_s

jc - JSON Convert Common Log Format file streaming parser

> This streaming parser outputs JSON Lines (cli) or returns an Iterable of
> Dictionaries (module)

This parser will handle the Common Log Format standard as specified at
https://www.w3.org/Daemon/User/Config/Logging.html#common-logfile-format.

Combined Log Format is also supported. (Referer and User Agent fields added)

Extra fields may be present and will be enclosed in the `extra` field as
a single string.

If a log line cannot be parsed, an object with an `unparsable` field will
be present with a value of the original line.

The `epoch` calculated timestamp field is naive. (i.e. based on the
local time of the system the parser is run on)

The `epoch_utc` calculated timestamp field is timezone-aware and is
only available if the timezone field is UTC.

Usage (cli):

    $ cat file.log | jc --clf-s

Usage (module):

    import jc

    result = jc.parse('clf_s', common_log_file_output.splitlines())
    for item in result:
        # do something

Schema:

    Empty strings and `-` values are converted to `null`/`None`.

    {
      "host": string,
      "ident": string,
      "authuser": string,
      "date": string,
      "day": integer,
      "month": string,
      "year": integer,
      "hour": integer,
      "minute": integer,
      "second": integer,
      "tz": string,
      "request": string,
      "request_method": string,
      "request_url": string,
      "request_version": string,
      "status": integer,
      "bytes": integer,
      "referer": string,
      "user_agent": string,
      "extra": string,
      "epoch": integer,        # [0]
      "epoch_utc": integer,    # [1]
      "unparsable": string     # [2]
    }

    [0] naive timestamp
    [1] timezone-aware timestamp. Only available if timezone field is UTC
    [2] exists if the line was not able to be parsed

Examples:

    $ cat file.log | jc --clf-s
    {"host":"127.0.0.1","ident":"user-identifier","authuser":"frank","...}
    {"host":"1.1.1.2","ident":null,"authuser":null,"date":"11/Nov/2016...}
    ...

    $ cat file.log | jc --clf-s -r
    {"host":"127.0.0.1","ident":"user-identifier","authuser":"frank","...}
    {"host":"1.1.1.2","ident":"-","authuser":"-","date":"11/Nov/2016:0...}
    ...
<a id="jc.parsers.clf_s.parse"></a>

### parse

```python
@add_jc_meta
def parse(data: Iterable[str],
          raw: bool = False,
          quiet: bool = False,
          ignore_exceptions: bool = False) -> StreamingOutputType
```

Main text parsing generator function. Returns an iterable object.

Parameters:

    data:              (iterable) line-based text data to parse
                       (e.g. sys.stdin or str.splitlines())

    raw:               (boolean)  unprocessed output if True
    quiet:             (boolean)  suppress warning messages if True
    ignore_exceptions: (boolean)  ignore parsing exceptions if True

Returns:

    Iterable of Dictionaries

### Parser Information
Compatibility: linux, darwin, cygwin, win32, aix, freebsd

Version 1.0 by Kelly Brazil (kellyjonbrazil@gmail.com)
@ -6,7 +6,7 @@
jc - JSON Convert ISO 8601 Datetime string parser

This parser supports standard ISO 8601 strings that include both date and
-time. If no timezone or offset information is available in the sring, then
+time. If no timezone or offset information is available in the string, then
UTC timezone is used.

Usage (cli):
@ -106,7 +106,7 @@ Schema:
    ]

    [0] naive timestamp if "when" field is parsable, else null
-    [1] timezone aware timestamp availabe for UTC, else null
+    [1] timezone aware timestamp available for UTC, else null

Examples:
@ -72,7 +72,6 @@ Examples:
    ...
    ]

-
    $ findmnt | jc --findmnt -p -r
    [
      {
@ -40,13 +40,13 @@ Schema:
    [
      {
        "commit": string,
-        "author": string,
-        "author_email": string,
+        "author": string/null,
+        "author_email": string/null,
        "date": string,
        "epoch": integer,       # [0]
        "epoch_utc": integer,   # [1]
-        "commit_by": string,
-        "commit_by_email": string,
+        "commit_by": string/null,
+        "commit_by_email": string/null,
        "commit_by_date": string,
        "message": string,
        "stats" : {
@ -61,7 +61,7 @@ Schema:
    ]

    [0] naive timestamp if "date" field is parsable, else null
-    [1] timezone aware timestamp availabe for UTC, else null
+    [1] timezone aware timestamp available for UTC, else null

Examples:
@ -172,4 +172,4 @@ Returns:
|
||||
### Parser Information
|
||||
Compatibility: linux, darwin, cygwin, win32, aix, freebsd
|
||||
|
||||
Version 1.2 by Kelly Brazil (kellyjonbrazil@gmail.com)
|
||||
Version 1.3 by Kelly Brazil (kellyjonbrazil@gmail.com)
|
||||
|
@@ -41,13 +41,13 @@ Schema:

     {
       "commit":             string,
-      "author":             string,
-      "author_email":       string,
+      "author":             string/null,
+      "author_email":       string/null,
       "date":               string,
       "epoch":              integer,     # [0]
       "epoch_utc":          integer,     # [1]
-      "commit_by":          string,
-      "commit_by_email":    string,
+      "commit_by":          string/null,
+      "commit_by_email":    string/null,
       "commit_by_date":     string,
       "message":            string,
       "stats" : {
@@ -68,7 +68,7 @@ Schema:
     }

     [0] naive timestamp if "date" field is parsable, else null
-    [1] timezone aware timestamp availabe for UTC, else null
+    [1] timezone aware timestamp available for UTC, else null

 Examples:

@@ -108,4 +108,4 @@ Returns:
 ### Parser Information
 Compatibility:  linux, darwin, cygwin, win32, aix, freebsd

-Version 1.2 by Kelly Brazil (kellyjonbrazil@gmail.com)
+Version 1.3 by Kelly Brazil (kellyjonbrazil@gmail.com)
@@ -71,7 +71,7 @@ Examples:
 ```python
 def parse(data: str,
           raw: bool = False,
-          quiet: bool = False) -> List[JSONDictType]
+          quiet: bool = False) -> Union[JSONDictType, List[JSONDictType]]
 ```

 Main text parsing function
@@ -3,7 +3,7 @@

 # jc.parsers.ifconfig

-jc - JSON Convert `foo` command output parser
+jc - JSON Convert `ifconfig` command output parser

 No `ifconfig` options are supported.
@@ -42,6 +42,7 @@ Schema:
       "ipv6_addr": string,        # [0]
       "ipv6_mask": integer,       # [0]
       "ipv6_scope": string,       # [0]
+      "ipv6_scope_id": string,    # [0]
       "ipv6_type": string,        # [0]
       "rx_packets": integer,
       "rx_bytes": integer,
@@ -87,10 +88,19 @@ Schema:
       "ipv6: [
         {
           "address": string,
+          "scope_id": string,
           "mask": integer,
           "scope": string,
           "type": string
         }
       ],
+      "lanes": [
+        {
+          "lane": integer,
+          "rx_power_mw": float,
+          "rx_power_dbm": float,
+          "tx_bias_ma": float
+        }
+      ]
     }
   ]
@@ -147,6 +157,7 @@ Examples:
       "ipv6": [
         {
           "address": "fe80::c1cb:715d:bc3e:b8a0",
+          "scope_id": null,
           "mask": 64,
           "scope": "0x20",
           "type": "link"
@@ -195,6 +206,7 @@ Examples:
       "ipv6": [
         {
           "address": "fe80::c1cb:715d:bc3e:b8a0",
+          "scope_id": null,
           "mask": "64",
           "scope": "0x20",
           "type": "link"
@@ -228,4 +240,4 @@ Returns:
 ### Parser Information
 Compatibility:  linux, aix, freebsd, darwin

-Version 2.0 by Kelly Brazil (kellyjonbrazil@gmail.com)
+Version 2.1 by Kelly Brazil (kellyjonbrazil@gmail.com)
@@ -130,4 +130,4 @@ Returns:
 ### Parser Information
 Compatibility:  linux

-Version 1.2 by Kelly Brazil (kellyjonbrazil@gmail.com)
+Version 1.3 by Kelly Brazil (kellyjonbrazil@gmail.com)
@@ -99,4 +99,4 @@ Returns:
 ### Parser Information
 Compatibility:  linux, darwin, cygwin, aix, freebsd

-Version 1.1 by Kelly Brazil (kellyjonbrazil@gmail.com)
+Version 1.2 by Kelly Brazil (kellyjonbrazil@gmail.com)
docs/parsers/openvpn.md  (new file, 178 lines)
@@ -0,0 +1,178 @@
[Home](https://kellyjonbrazil.github.io/jc/)
<a id="jc.parsers.openvpn"></a>

# jc.parsers.openvpn

jc - JSON Convert openvpn-status.log file parser

The `*_epoch` calculated timestamp fields are naive. (i.e. based on
the local time of the system the parser is run on)

Usage (cli):

    $ cat openvpn-status.log | jc --openvpn

Usage (module):

    import jc
    result = jc.parse('openvpn', openvpn_status_log_file_output)

Schema:

    {
      "clients": [
        {
          "common_name": string,
          "real_address": string,
          "real_address_prefix": integer,     # [0]
          "real_address_port": integer,       # [0]
          "bytes_received": integer,
          "bytes_sent": integer,
          "connected_since": string,
          "connected_since_epoch": integer,
          "updated": string,
          "updated_epoch": integer,
        }
      ],
      "routing_table": [
        {
          "virtual_address": string,
          "virtual_address_prefix": integer,  # [0]
          "virtual_address_port": integer,    # [0]
          "common_name": string,
          "real_address": string,
          "real_address_prefix": integer,     # [0]
          "real_address_port": integer,       # [0]
          "last_reference": string,
          "last_reference_epoch": integer,
        }
      ],
      "global_stats": {
        "max_bcast_mcast_queue_len": integer
      }
    }

    [0] null/None if not found

Examples:

    $ cat openvpn-status.log | jc --openvpn -p
    {
      "clients": [
        {
          "common_name": "foo@example.com",
          "real_address": "10.10.10.10",
          "bytes_received": 334948,
          "bytes_sent": 1973012,
          "connected_since": "Thu Jun 18 04:23:03 2015",
          "updated": "Thu Jun 18 08:12:15 2015",
          "real_address_prefix": null,
          "real_address_port": 49502,
          "connected_since_epoch": 1434626583,
          "updated_epoch": 1434640335
        },
        {
          "common_name": "foo@example.com",
          "real_address": "10.10.10.10",
          "bytes_received": 334948,
          "bytes_sent": 1973012,
          "connected_since": "Thu Jun 18 04:23:03 2015",
          "updated": "Thu Jun 18 08:12:15 2015",
          "real_address_prefix": null,
          "real_address_port": 49503,
          "connected_since_epoch": 1434626583,
          "updated_epoch": 1434640335
        }
      ],
      "routing_table": [
        {
          "virtual_address": "192.168.255.118",
          "common_name": "baz@example.com",
          "real_address": "10.10.10.10",
          "last_reference": "Thu Jun 18 08:12:09 2015",
          "virtual_address_prefix": null,
          "virtual_address_port": null,
          "real_address_prefix": null,
          "real_address_port": 63414,
          "last_reference_epoch": 1434640329
        },
        {
          "virtual_address": "10.200.0.0",
          "common_name": "baz@example.com",
          "real_address": "10.10.10.10",
          "last_reference": "Thu Jun 18 08:12:09 2015",
          "virtual_address_prefix": 16,
          "virtual_address_port": null,
          "real_address_prefix": null,
          "real_address_port": 63414,
          "last_reference_epoch": 1434640329
        }
      ],
      "global_stats": {
        "max_bcast_mcast_queue_len": 0
      }
    }

    $ cat openvpn-status.log | jc --openvpn -p -r
    {
      "clients": [
        {
          "common_name": "foo@example.com",
          "real_address": "10.10.10.10:49502",
          "bytes_received": "334948",
          "bytes_sent": "1973012",
          "connected_since": "Thu Jun 18 04:23:03 2015",
          "updated": "Thu Jun 18 08:12:15 2015"
        },
        {
          "common_name": "foo@example.com",
          "real_address": "10.10.10.10:49503",
          "bytes_received": "334948",
          "bytes_sent": "1973012",
          "connected_since": "Thu Jun 18 04:23:03 2015",
          "updated": "Thu Jun 18 08:12:15 2015"
        }
      ],
      "routing_table": [
        {
          "virtual_address": "192.168.255.118",
          "common_name": "baz@example.com",
          "real_address": "10.10.10.10:63414",
          "last_reference": "Thu Jun 18 08:12:09 2015"
        },
        {
          "virtual_address": "10.200.0.0/16",
          "common_name": "baz@example.com",
          "real_address": "10.10.10.10:63414",
          "last_reference": "Thu Jun 18 08:12:09 2015"
        }
      ],
      "global_stats": {
        "max_bcast_mcast_queue_len": "0"
      }
    }

<a id="jc.parsers.openvpn.parse"></a>

### parse

```python
def parse(data: str, raw: bool = False, quiet: bool = False) -> JSONDictType
```

Main text parsing function

Parameters:

    data:        (string)  text data to parse
    raw:         (boolean) unprocessed output if True
    quiet:       (boolean) suppress warning messages if True

Returns:

    Dictionary. Raw or processed structured data.

### Parser Information
Compatibility:  linux, darwin, cygwin, win32, aix, freebsd

Version 1.0 by Kelly Brazil (kellyjonbrazil@gmail.com)
@@ -21,12 +21,15 @@ Usage (module):
 Schema:

     {
-      'partition':      string,
-      'name':           string,
-      'short_name':     string,
-      'type':           string
+      "partition":      string,
+      "efi_bootmgr":    string,    # [0]
+      "name":           string,
+      "short_name":     string,
+      "type":           string
     }

+    [0] only exists if an EFI boot manager is detected
+
 Examples:

     $ os-prober | jc --os-prober -p
@@ -60,4 +63,4 @@ Returns:
 ### Parser Information
 Compatibility:  linux

-Version 1.0 by Kelly Brazil (kellyjonbrazil@gmail.com)
+Version 1.1 by Kelly Brazil (kellyjonbrazil@gmail.com)
docs/parsers/pgpass.md  (new file, 75 lines)
@@ -0,0 +1,75 @@
[Home](https://kellyjonbrazil.github.io/jc/)
<a id="jc.parsers.pgpass"></a>

# jc.parsers.pgpass

jc - JSON Convert PostgreSQL password file parser

Usage (cli):

    $ cat /var/lib/postgresql/.pgpass | jc --pgpass

Usage (module):

    import jc
    result = jc.parse('pgpass', postgres_password_file)

Schema:

    [
      {
        "hostname": string,
        "port": string,
        "database": string,
        "username": string,
        "password": string
      }
    ]

Examples:

    $ cat /var/lib/postgresql/.pgpass | jc --pgpass -p
    [
      {
        "hostname": "dbserver",
        "port": "*",
        "database": "db1",
        "username": "dbuser",
        "password": "pwd123"
      },
      {
        "hostname": "dbserver2",
        "port": "8888",
        "database": "inventory",
        "username": "joe:user",
        "password": "abc123"
      },
      ...
    ]

<a id="jc.parsers.pgpass.parse"></a>

### parse

```python
def parse(data: str,
          raw: bool = False,
          quiet: bool = False) -> List[JSONDictType]
```

Main text parsing function

Parameters:

    data:        (string)  text data to parse
    raw:         (boolean) unprocessed output if True
    quiet:       (boolean) suppress warning messages if True

Returns:

    List of Dictionaries. Raw or processed structured data.

### Parser Information
Compatibility:  linux, darwin, cygwin, win32, aix, freebsd

Version 1.0 by Kelly Brazil (kellyjonbrazil@gmail.com)
@@ -106,4 +106,4 @@ Returns:
 ### Parser Information
 Compatibility:  linux, darwin, freebsd

-Version 1.1 by Kelly Brazil (kellyjonbrazil@gmail.com)
+Version 1.2 by Kelly Brazil (kellyjonbrazil@gmail.com)
@@ -6,7 +6,7 @@
 jc - JSON Convert Proc file output parser

 This parser automatically identifies the Proc file and calls the
-corresponding parser to peform the parsing.
+corresponding parser to perform the parsing.

 Magic syntax for converting `/proc` files is also supported by running
 `jc /proc/<path to file>`. Any `jc` options must be specified before the
@@ -114,4 +114,4 @@ Returns:
 ### Parser Information
 Compatibility:  linux, darwin, freebsd

-Version 1.1 by Kelly Brazil (kellyjonbrazil@gmail.com)
+Version 1.2 by Kelly Brazil (kellyjonbrazil@gmail.com)
@@ -161,7 +161,7 @@ Schema:

 Examples:

-    $ sshd -T | jc --sshd_conf -p
+    $ sshd -T | jc --sshd-conf -p
     {
       "acceptenv": [
         "LANG",
@@ -376,7 +376,7 @@ Examples:
       "subsystem_command": "/usr/lib/openssh/sftp-server"
     }

-    $ sshd -T | jc --sshd_conf -p -r
+    $ sshd -T | jc --sshd-conf -p -r
     {
       "acceptenv": [
         "LANG",
@@ -107,4 +107,4 @@ Returns:
 ### Parser Information
 Compatibility:  linux, darwin, freebsd

-Version 1.2 by Kelly Brazil (kellyjonbrazil@gmail.com)
+Version 1.3 by Kelly Brazil (kellyjonbrazil@gmail.com)
@@ -53,7 +53,7 @@ Blank values converted to `null`/`None`.
     ]

     [0] naive timestamp if "timestamp" field is parsable, else null
-    [1] timezone aware timestamp availabe for UTC, else null
+    [1] timezone aware timestamp available for UTC, else null
     [2] this field exists if the syslog line is not parsable. The value
         is the original syslog line.

@@ -64,7 +64,7 @@ Blank values converted to `null`/`None`.
     }

     [0] naive timestamp if "timestamp" field is parsable, else null
-    [1] timezone aware timestamp availabe for UTC, else null
+    [1] timezone aware timestamp available for UTC, else null
     [2] this field exists if the syslog line is not parsable. The value
         is the original syslog line.

@@ -123,4 +123,4 @@ Returns:
 ### Parser Information
 Compatibility:  linux

-Version 1.1 by Kelly Brazil (kellyjonbrazil@gmail.com)
+Version 1.2 by Kelly Brazil (kellyjonbrazil@gmail.com)
|
@ -51,7 +51,7 @@ Add `_jc_meta` object to output line if `ignore_exceptions=True`
|
||||
### stream\_error
|
||||
|
||||
```python
|
||||
def stream_error(e: BaseException, line: str) -> Dict[str, MetadataType]
|
||||
def stream_error(e: BaseException, line: str) -> JSONDictType
|
||||
```
|
||||
|
||||
Return an error `_jc_meta` field.
|
||||
|
@ -91,7 +91,7 @@ Parameters:
|
||||
the parser. compatible options:
|
||||
linux, darwin, cygwin, win32, aix, freebsd
|
||||
|
||||
quiet: (bool) supress compatibility message if True
|
||||
quiet: (bool) suppress compatibility message if True
|
||||
|
||||
Returns:
|
||||
|
||||
@ -233,3 +233,6 @@ Returns a timestamp object with the following attributes:
|
||||
utc (int | None): aware timestamp only if UTC timezone
|
||||
detected in datetime string. None if conversion fails.
|
||||
|
||||
iso (str | None): ISO string - timezone information is output
|
||||
only if UTC timezone is detected in the datetime string.
|
||||
|
||||
|
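The naive/UTC/ISO split that the new `.iso` attribute completes can be sketched with the standard library. The `TimeStamp` holder below is an invented stand-in for the object `jc.utils.timestamp()` returns, not jc's actual implementation; only the `Z`-suffix UTC case is handled here:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass
class TimeStamp:
    naive: Optional[int]  # epoch from the string as interpreted locally
    utc: Optional[int]    # aware epoch, only when the string is explicitly UTC
    iso: Optional[str]    # ISO string, timezone-aware only when UTC was detected


def to_timestamp(dt_string: str) -> TimeStamp:
    # a trailing 'Z' marks UTC; anything else is treated as naive local time
    if dt_string.endswith('Z'):
        dt = datetime.fromisoformat(dt_string[:-1]).replace(tzinfo=timezone.utc)
        epoch = int(dt.timestamp())
        return TimeStamp(naive=epoch, utc=epoch, iso=dt.isoformat())
    dt = datetime.fromisoformat(dt_string)
    return TimeStamp(naive=int(dt.timestamp()), utc=None, iso=dt.isoformat())


ts = to_timestamp('2022-12-16T00:00:00Z')
```

When no UTC marker is present, `utc` stays `None` and `iso` carries no offset, mirroring the documented behavior.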
jc/cli.py  (14 changes)
@@ -15,7 +15,7 @@ from .lib import (
     __version__, parser_info, all_parser_info, parsers, _get_parser, _parser_is_streaming,
     parser_mod_list, standard_parser_mod_list, plugin_parser_mod_list, streaming_parser_mod_list
 )
-from .jc_types import JSONDictType, AboutJCType, MetadataType, CustomColorType, ParserInfoType
+from .jc_types import JSONDictType, CustomColorType, ParserInfoType
 from . import utils
 from .cli_data import (
     long_options_map, new_pygments_colors, old_pygments_colors, helptext_preamble_string,
@@ -74,7 +74,7 @@ class JcCli():

     def __init__(self) -> None:
         self.data_in: Optional[Union[str, bytes, TextIO]] = None
-        self.data_out: Optional[Union[List[JSONDictType], JSONDictType, AboutJCType]] = None
+        self.data_out: Optional[Union[List[JSONDictType], JSONDictType]] = None
         self.options: List[str] = []
         self.args: List[str] = []
         self.parser_module: Optional[ModuleType] = None
@@ -214,7 +214,7 @@ class JcCli():
         return otext

     @staticmethod
-    def about_jc() -> AboutJCType:
+    def about_jc() -> JSONDictType:
         """Return jc info and the contents of each parser.info as a dictionary"""
         return {
             'name': 'jc',
@@ -598,7 +598,7 @@ class JcCli():
         even if there are no results.
         """
         if self.run_timestamp:
-            meta_obj: MetadataType = {
+            meta_obj: JSONDictType = {
                 'parser': self.parser_name,
                 'timestamp': self.run_timestamp.timestamp()
             }
@@ -609,9 +609,9 @@ class JcCli():

         if isinstance(self.data_out, dict):
             if '_jc_meta' not in self.data_out:
-                self.data_out['_jc_meta'] = {}  # type: ignore
+                self.data_out['_jc_meta'] = {}

-            self.data_out['_jc_meta'].update(meta_obj)  # type: ignore
+            self.data_out['_jc_meta'].update(meta_obj)

         elif isinstance(self.data_out, list):
             if not self.data_out:
@@ -622,7 +622,7 @@ class JcCli():
                 if '_jc_meta' not in item:
                     item['_jc_meta'] = {}

-                item['_jc_meta'].update(meta_obj)  # type: ignore
+                item['_jc_meta'].update(meta_obj)

             else:
                 utils.error_message(['Parser returned an unsupported object type.'])
@@ -1,11 +1,9 @@
 """jc - JSON Convert lib module"""

 import sys
-from datetime import datetime
-from typing import Dict, List, Tuple, Iterator, Optional, Union
+from typing import Any, Dict, List, Tuple, Iterator, Optional, Union

-JSONDictType = Dict[str, Union[str, int, float, bool, List, Dict, None]]
-MetadataType = Dict[str, Optional[Union[str, int, float, List[str], datetime]]]
+JSONDictType = Dict[str, Any]
 StreamingOutputType = Iterator[Union[JSONDictType, Tuple[BaseException, str]]]

 if sys.version_info >= (3, 8):
@@ -45,9 +43,6 @@ else:
     TimeStampFormatType = Dict


-AboutJCType = Dict[str, Union[str, int, List[ParserInfoType]]]
-
-
 try:
     from pygments.token import (Name, Number, String, Keyword)
     CustomColorType = Dict[Union[Name.Tag, Number, String, Keyword], str]
@@ -9,7 +9,7 @@ from .jc_types import ParserInfoType, JSONDictType
 from jc import appdirs


-__version__ = '1.22.2'
+__version__ = '1.22.3'

 parsers: List[str] = [
     'acpi',
@@ -19,10 +19,13 @@ parsers: List[str] = [
     'asciitable',
     'asciitable-m',
     'blkid',
+    'cbt',
     'cef',
     'cef-s',
     'chage',
     'cksum',
+    'clf',
+    'clf-s',
     'crontab',
     'crontab-u',
     'csv',
@@ -82,9 +85,11 @@ parsers: List[str] = [
     'netstat',
     'nmcli',
     'ntpq',
+    'openvpn',
     'os-prober',
     'passwd',
     'pci-ids',
+    'pgpass',
     'pidstat',
     'pidstat-s',
     'ping',
@@ -567,7 +567,7 @@ class DSASignature(Sequence):
     @classmethod
     def from_p1363(cls, data):
         """
-        Reads a signature from a byte string encoding accordint to IEEE P1363,
+        Reads a signature from a byte string encoding according to IEEE P1363,
         which is used by Microsoft's BCryptSignHash() function.

         :param data:
@@ -247,7 +247,7 @@ class Asn1Value(object):

         :param no_explicit:
             If explicit tagging info should be removed from this instance.
-            Used internally to allow contructing the underlying value that
+            Used internally to allow constructing the underlying value that
             has been wrapped in an explicit tag.

         :param tag_type:
@@ -697,7 +697,7 @@ class Castable(object):
         if other_class.tag != self.__class__.tag:
             raise TypeError(unwrap(
                 '''
-                Can not covert a value from %s object to %s object since they
+                Can not convert a value from %s object to %s object since they
                 use different tags: %d versus %d
                 ''',
                 type_name(other_class),
@@ -1349,7 +1349,7 @@ class Choice(Asn1Value):

 class Concat(object):
     """
-    A class that contains two or more encoded child values concatentated
+    A class that contains two or more encoded child values concatenated
     together. THIS IS NOT PART OF THE ASN.1 SPECIFICATION! This exists to handle
     the x509.TrustedCertificate() class for OpenSSL certificates containing
     extra information.
@@ -3757,7 +3757,7 @@ class Sequence(Asn1Value):

     def _make_value(self, field_name, field_spec, value_spec, field_params, value):
         """
-        Contructs an appropriate Asn1Value object for a field
+        Constructs an appropriate Asn1Value object for a field

         :param field_name:
             A unicode string of the field name
@@ -3766,7 +3766,7 @@ class Sequence(Asn1Value):
             An Asn1Value class that is the field spec

         :param value_spec:
-            An Asn1Value class that is the vaue spec
+            An Asn1Value class that is the value spec

         :param field_params:
             None or a dict of params for the field spec
jc/parsers/cbt.py  (new file, 194 lines)
@@ -0,0 +1,194 @@
"""jc - JSON Convert `cbt` command output parser (Google Bigtable)

Parses the human-, but not machine-, friendly output of the cbt command (for
Google's Bigtable).

No effort is made to convert the data types of the values in the cells.

The `timestamp_epoch` calculated timestamp field is naive. (i.e. based on
the local time of the system the parser is run on)

The `timestamp_epoch_utc` calculated timestamp field is timezone-aware and
is only available if the timestamp has a UTC timezone.

The `timestamp_iso` calculated timestamp field will only include UTC
timezone information if the timestamp has a UTC timezone.

Raw output contains all cells for each column (including timestamps), while
the normal output contains only the latest value for each column.

Usage (cli):

    $ cbt | jc --cbt

or

    $ jc cbt

Usage (module):

    import jc
    result = jc.parse('cbt', cbt_command_output)

Schema:

    [
      {
        "key": string,
        "cells": {
          <string>: {           # column family
            <string>: string    # column: value
          }
        }
      }
    ]

Schema (raw):

    [
      {
        "key": string,
        "cells": [
          {
            "column_family": string,
            "column": string,
            "value": string,
            "timestamp_iso": string,
            "timestamp_epoch": integer,
            "timestamp_epoch_utc": integer
          }
        ]
      }
    ]

Examples:

    $ cbt -project=$PROJECT -instance=$INSTANCE lookup $TABLE foo | jc --cbt -p
    [
      {
        "key": "foo",
        "cells": {
          "foo": {
            "bar": "baz"
          }
        }
      }
    ]

    $ cbt -project=$PROJECT -instance=$INSTANCE lookup $TABLE foo | jc --cbt -p -r
    [
      {
        "key": "foo",
        "cells": [
          {
            "column_family": "foo",
            "column": "bar",
            "value": "baz1",
            "timestamp_iso": "1970-01-01T01:00:00",
            "timestamp_epoch": 32400,
            "timestamp_epoch_utc": null
          }
        ]
      }
    ]
"""
from itertools import groupby
from typing import List, Dict
from jc.jc_types import JSONDictType
import jc.utils


class info():
    """Provides parser metadata (version, author, etc.)"""
    version = '1.0'
    description = '`cbt` (Google Bigtable) command parser'
    author = 'Andreas Weiden'
    author_email = 'andreas.weiden@gmail.com'
    compatible = ['linux', 'darwin', 'cygwin', 'win32', 'aix', 'freebsd']
    magic_commands = ['cbt']


__version__ = info.version


def _process(proc_data: List[JSONDictType]) -> List[JSONDictType]:
    """
    Final processing to conform to the schema.

    Parameters:

        proc_data:   (List of Dictionaries) raw structured data to process

    Returns:

        List of Dictionaries. Structured to conform to the schema.
    """
    out_data = []
    for row in proc_data:
        cells: Dict = {}
        key_func = lambda cell: (cell["column_family"], cell["column"])
        all_cells = sorted(row["cells"], key=key_func)
        for (column_family, column), group in groupby(all_cells, key=key_func):
            group_list = sorted(group, key=lambda cell: cell["timestamp_iso"], reverse=True)
            if column_family not in cells:
                cells[column_family] = {}
            cells[column_family][column] = group_list[0]["value"]
        row["cells"] = cells
        out_data.append(row)
    return out_data


def parse(
    data: str,
    raw: bool = False,
    quiet: bool = False
) -> List[JSONDictType]:
    """
    Main text parsing function

    Parameters:

        data:        (string)  text data to parse
        raw:         (boolean) unprocessed output if True
        quiet:       (boolean) suppress warning messages if True

    Returns:

        List of Dictionaries. Raw or processed structured data.
    """
    jc.utils.compatibility(__name__, info.compatible, quiet)
    jc.utils.input_type_check(data)

    raw_output: List[Dict] = []

    if jc.utils.has_data(data):
        for line in filter(None, data.split("-" * 40)):
            key = None
            cells = []
            column_name = ""
            timestamp = ""
            value_next = False
            for field in line.splitlines():
                if not field.strip():
                    continue
                if field.startswith(" " * 4):
                    value = field.strip(' "')
                    if value_next:
                        dt = jc.utils.timestamp(timestamp, format_hint=(1750, 1755))
                        cells.append({
                            "column_family": column_name.split(":", 1)[0],
                            "column": column_name.split(":", 1)[1],
                            "value": value,
                            "timestamp_iso": dt.iso,
                            "timestamp_epoch": dt.naive,
                            "timestamp_epoch_utc": dt.utc
                        })
                elif field.startswith(" " * 2):
                    column_name, timestamp = map(str.strip, field.split("@"))
                    value_next = True
                else:
                    key = field
            if key is not None:
                raw_output.append({"key": key, "cells": cells})

    return raw_output if raw else _process(raw_output)
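The grouping step in the cbt parser's `_process` — keep only the newest value per (column family, column) pair — can be exercised in isolation. The sample cells below are invented for illustration:

```python
from itertools import groupby

# raw-style cells: two versions of cf:bar plus one cf:qux
cells = [
    {"column_family": "cf", "column": "bar", "value": "old",
     "timestamp_iso": "1970-01-01T00:00:00"},
    {"column_family": "cf", "column": "bar", "value": "new",
     "timestamp_iso": "1970-01-02T00:00:00"},
    {"column_family": "cf", "column": "qux", "value": "x",
     "timestamp_iso": "1970-01-01T00:00:00"},
]

key_func = lambda cell: (cell["column_family"], cell["column"])
latest = {}
for (family, column), group in groupby(sorted(cells, key=key_func), key=key_func):
    # ISO-8601 strings sort chronologically, so reverse order puts newest first
    newest = sorted(group, key=lambda c: c["timestamp_iso"], reverse=True)[0]
    latest.setdefault(family, {})[column] = newest["value"]
```

Sorting before `groupby` is essential: `itertools.groupby` only merges adjacent items, so unsorted cells would produce duplicate groups.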
@@ -185,7 +185,7 @@ def _pycef_parse(str_input):

     # If the input entry had any blanks in the required headers, that's wrong
     # and we should return. Note we explicitly don't check the last item in the
-    # split list becuase the header ends in a '|' which means the last item
+    # split list because the header ends in a '|' which means the last item
     # will always be an empty string (it doesn't exist, but the delimiter does).
     if "" in spl[0:-1]:
         raise ParseError('Blank field(s) in CEF header. Is it valid CEF format?')
jc/parsers/clf.py  (new file, 298 lines)
@@ -0,0 +1,298 @@
"""jc - JSON Convert Common Log Format file parser

This parser will handle the Common Log Format standard as specified at
https://www.w3.org/Daemon/User/Config/Logging.html#common-logfile-format.

Combined Log Format is also supported. (Referer and User Agent fields added)

Extra fields may be present and will be enclosed in the `extra` field as
a single string.

If a log line cannot be parsed, an object with an `unparsable` field will
be present with a value of the original line.

The `epoch` calculated timestamp field is naive. (i.e. based on the
local time of the system the parser is run on)

The `epoch_utc` calculated timestamp field is timezone-aware and is
only available if the timezone field is UTC.

Usage (cli):

    $ cat file.log | jc --clf

Usage (module):

    import jc
    result = jc.parse('clf', common_log_file_output)

Schema:

    Empty strings and `-` values are converted to `null`/`None`.

    [
      {
        "host": string,
        "ident": string,
        "authuser": string,
        "date": string,
        "day": integer,
        "month": string,
        "year": integer,
        "hour": integer,
        "minute": integer,
        "second": integer,
        "tz": string,
        "request": string,
        "request_method": string,
        "request_url": string,
        "request_version": string,
        "status": integer,
        "bytes": integer,
        "referer": string,
        "user_agent": string,
        "extra": string,
        "epoch": integer,       # [0]
        "epoch_utc": integer,   # [1]
        "unparsable": string    # [2]
      }
    ]

    [0] naive timestamp
    [1] timezone-aware timestamp. Only available if timezone field is UTC
    [2] exists if the line was not able to be parsed

Examples:

    $ cat file.log | jc --clf -p
    [
      {
        "host": "127.0.0.1",
        "ident": "user-identifier",
        "authuser": "frank",
        "date": "10/Oct/2000:13:55:36 -0700",
        "day": 10,
        "month": "Oct",
        "year": 2000,
        "hour": 13,
        "minute": 55,
        "second": 36,
        "tz": "-0700",
        "request": "GET /apache_pb.gif HTTPS/1.0",
        "status": 200,
        "bytes": 2326,
        "referer": null,
        "user_agent": null,
        "extra": null,
        "request_method": "GET",
        "request_url": "/apache_pb.gif",
        "request_version": "HTTPS/1.0",
        "epoch": 971211336,
        "epoch_utc": null
      },
      {
        "host": "1.1.1.2",
        "ident": null,
        "authuser": null,
        "date": "11/Nov/2016:03:04:55 +0100",
        "day": 11,
        "month": "Nov",
        "year": 2016,
        "hour": 3,
        "minute": 4,
        "second": 55,
        "tz": "+0100",
        "request": "GET /",
        "status": 200,
        "bytes": 83,
        "referer": null,
        "user_agent": null,
        "extra": "- 9221 1.1.1.1",
        "request_method": "GET",
        "request_url": "/",
        "request_version": null,
        "epoch": 1478862295,
        "epoch_utc": null
      },
      ...
    ]

    $ cat file.log | jc --clf -p -r
    [
      {
        "host": "127.0.0.1",
        "ident": "user-identifier",
        "authuser": "frank",
        "date": "10/Oct/2000:13:55:36 -0700",
        "day": "10",
        "month": "Oct",
        "year": "2000",
        "hour": "13",
        "minute": "55",
        "second": "36",
        "tz": "-0700",
        "request": "GET /apache_pb.gif HTTPS/1.0",
        "status": "200",
        "bytes": "2326",
        "referer": null,
        "user_agent": null,
        "extra": "",
        "request_method": "GET",
        "request_url": "/apache_pb.gif",
        "request_version": "HTTPS/1.0"
      },
      {
        "host": "1.1.1.2",
        "ident": "-",
        "authuser": "-",
        "date": "11/Nov/2016:03:04:55 +0100",
        "day": "11",
        "month": "Nov",
        "year": "2016",
        "hour": "03",
        "minute": "04",
        "second": "55",
        "tz": "+0100",
        "request": "GET /",
        "status": "200",
        "bytes": "83",
        "referer": "-",
        "user_agent": "-",
        "extra": "- 9221 1.1.1.1",
        "request_method": "GET",
        "request_url": "/",
        "request_version": null
      },
      ...
    ]
"""
import re
from typing import List, Dict
from jc.jc_types import JSONDictType
import jc.utils


class info():
    """Provides parser metadata (version, author, etc.)"""
    version = '1.0'
    description = 'Common and Combined Log Format file parser'
    author = 'Kelly Brazil'
    author_email = 'kellyjonbrazil@gmail.com'
    compatible = ['linux', 'darwin', 'cygwin', 'win32', 'aix', 'freebsd']


__version__ = info.version


def _process(proc_data: List[JSONDictType]) -> List[JSONDictType]:
    """
    Final processing to conform to the schema.

    Parameters:

        proc_data:   (List of Dictionaries) raw structured data to process

    Returns:

        List of Dictionaries. Structured to conform to the schema.
    """
    int_list = {'day', 'year', 'hour', 'minute', 'second', 'status', 'bytes'}

    for log in proc_data:
        for key, val in log.items():

            # integer conversions
            if key in int_list:
                log[key] = jc.utils.convert_to_int(val)

            # convert `-` and blank values to None
            if val == '-' or val == '':
                log[key] = None

        # add unix timestamps
        if 'date' in log:
            ts = jc.utils.timestamp(log['date'], format_hint=(1800,))
            log['epoch'] = ts.naive
            log['epoch_utc'] = ts.utc

    return proc_data


def parse(
    data: str,
    raw: bool = False,
    quiet: bool = False
) -> List[JSONDictType]:
    """
    Main text parsing function

    Parameters:

        data:        (string)  text data to parse
        raw:         (boolean) unprocessed output if True
        quiet:       (boolean) suppress warning messages if True

    Returns:

        List of Dictionaries. Raw or processed structured data.
    """
    jc.utils.compatibility(__name__, info.compatible, quiet)
    jc.utils.input_type_check(data)

    raw_output: List[Dict] = []
    output_line: Dict = {}

    clf_pattern = re.compile(r'''
        ^(?P<host>-|\S+)\s
        (?P<ident>-|\S+)\s
        (?P<authuser>-|\S+)\s
|
||||
\[
|
||||
(?P<date>
|
||||
(?P<day>\d+)/
|
||||
(?P<month>\S\S\S)/
|
||||
(?P<year>\d\d\d\d):
|
||||
(?P<hour>\d\d):
|
||||
(?P<minute>\d\d):
|
||||
(?P<second>\d\d)\s
|
||||
(?P<tz>\S+)
|
||||
)
|
||||
\]\s
|
||||
\"(?P<request>.*?)\"\s
|
||||
(?P<status>-|\d\d\d)\s
|
||||
(?P<bytes>-|\d+)\s?
|
||||
(?:\"(?P<referer>.*?)\"\s?)?
|
||||
(?:\"(?P<user_agent>.*?)\"\s?)?
|
||||
(?P<extra>.*)
|
||||
''', re.VERBOSE
|
||||
)
|
||||
|
||||
request_pattern = re.compile(r'''
|
||||
(?P<request_method>\S+)\s
|
||||
(?P<request_url>.*?(?=\sHTTPS?/|$))\s? # positive lookahead for HTTP(S)/ or end of string
|
||||
(?P<request_version>HTTPS?/[\d\.]+)?
|
||||
''', re.VERBOSE
|
||||
)
|
||||
|
||||
if jc.utils.has_data(data):
|
||||
|
||||
for line in filter(None, data.splitlines()):
|
||||
output_line = {}
|
||||
clf_match = re.match(clf_pattern, line)
|
||||
|
||||
if clf_match:
|
||||
output_line = clf_match.groupdict()
|
||||
|
||||
if output_line.get('request', None):
|
||||
request_string = output_line['request']
|
||||
request_match = re.match(request_pattern, request_string)
|
||||
if request_match:
|
||||
output_line.update(request_match.groupdict())
|
||||
|
||||
raw_output.append(output_line)
|
||||
|
||||
else:
|
||||
raw_output.append(
|
||||
{"unparsable": line}
|
||||
)
|
||||
|
||||
return raw_output if raw else _process(raw_output)
|
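The `clf_pattern` and `request_pattern` regexes above can be exercised standalone. This is a minimal sketch using a trimmed version of the patterns (the date subgroups are collapsed into a single `[^\]]+` group for brevity; the sample log line is illustrative, not from the jc test fixtures):

```python
import re

# trimmed version of clf.py's clf_pattern, covering the fields exercised here
clf_pattern = re.compile(r'''
    ^(?P<host>-|\S+)\s
    (?P<ident>-|\S+)\s
    (?P<authuser>-|\S+)\s
    \[(?P<date>[^\]]+)\]\s
    \"(?P<request>.*?)\"\s
    (?P<status>-|\d\d\d)\s
    (?P<bytes>-|\d+)
    ''', re.VERBOSE)

# same request sub-pattern as clf.py
request_pattern = re.compile(r'''
    (?P<request_method>\S+)\s
    (?P<request_url>.*?(?=\sHTTPS?/|$))\s?
    (?P<request_version>HTTPS?/[\d\.]+)?
    ''', re.VERBOSE)

line = '127.0.0.1 user-identifier frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326'
result = clf_pattern.match(line).groupdict()
result.update(request_pattern.match(result['request']).groupdict())
# raw values are all strings; _process() performs the integer conversions
print(result['host'], result['status'], result['request_version'])
```

Note that the `request` field is parsed twice: once as an opaque quoted string by `clf_pattern`, then split into method/URL/version by `request_pattern`.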
223  jc/parsers/clf_s.py  Normal file
@@ -0,0 +1,223 @@
"""jc - JSON Convert Common Log Format file streaming parser
|
||||
|
||||
> This streaming parser outputs JSON Lines (cli) or returns an Iterable of
|
||||
> Dictionaries (module)
|
||||
|
||||
This parser will handle the Common Log Format standard as specified at
|
||||
https://www.w3.org/Daemon/User/Config/Logging.html#common-logfile-format.
|
||||
|
||||
Combined Log Format is also supported. (Referer and User Agent fields added)
|
||||
|
||||
Extra fields may be present and will be enclosed in the `extra` field as
|
||||
a single string.
|
||||
|
||||
If a log line cannot be parsed, an object with an `unparsable` field will
|
||||
be present with a value of the original line.
|
||||
|
||||
The `epoch` calculated timestamp field is naive. (i.e. based on the
|
||||
local time of the system the parser is run on)
|
||||
|
||||
The `epoch_utc` calculated timestamp field is timezone-aware and is
|
||||
only available if the timezone field is UTC.
|
||||
|
||||
Usage (cli):
|
||||
|
||||
$ cat file.log | jc --clf-s
|
||||
|
||||
Usage (module):
|
||||
|
||||
import jc
|
||||
|
||||
result = jc.parse('clf_s', common_log_file_output.splitlines())
|
||||
for item in result:
|
||||
# do something
|
||||
|
||||
Schema:
|
||||
|
||||
Empty strings and `-` values are converted to `null`/`None`.
|
||||
|
||||
{
|
||||
"host": string,
|
||||
"ident": string,
|
||||
"authuser": string,
|
||||
"date": string,
|
||||
"day": integer,
|
||||
"month": string,
|
||||
"year": integer,
|
||||
"hour": integer,
|
||||
"minute": integer,
|
||||
"second": integer,
|
||||
"tz": string,
|
||||
"request": string,
|
||||
"request_method": string,
|
||||
"request_url": string,
|
||||
"request_version": string,
|
||||
"status": integer,
|
||||
"bytes": integer,
|
||||
"referer": string,
|
||||
"user_agent": string,
|
||||
"extra": string,
|
||||
"epoch": integer, # [0]
|
||||
"epoch_utc": integer, # [1]
|
||||
"unparsable": string # [2]
|
||||
}
|
||||
|
||||
[0] naive timestamp
|
||||
[1] timezone-aware timestamp. Only available if timezone field is UTC
|
||||
[2] exists if the line was not able to be parsed
|
||||
|
||||
Examples:
|
||||
|
||||
$ cat file.log | jc --clf-s
|
||||
{"host":"127.0.0.1","ident":"user-identifier","authuser":"frank","...}
|
||||
{"host":"1.1.1.2","ident":null,"authuser":null,"date":"11/Nov/2016...}
|
||||
...
|
||||
|
||||
$ cat file.log | jc --clf-s -r
|
||||
{"host":"127.0.0.1","ident":"user-identifier","authuser":"frank","...}
|
||||
{"host":"1.1.1.2","ident":"-","authuser":"-","date":"11/Nov/2016:0...}
|
||||
...
|
||||
"""
|
||||
import re
|
||||
from typing import Dict, Iterable
|
||||
import jc.utils
|
||||
from jc.streaming import (
|
||||
add_jc_meta, streaming_input_type_check, streaming_line_input_type_check, raise_or_yield
|
||||
)
|
||||
from jc.jc_types import JSONDictType, StreamingOutputType
|
||||
from jc.exceptions import ParseError
|
||||
|
||||
|
||||
class info():
|
||||
"""Provides parser metadata (version, author, etc.)"""
|
||||
version = '1.0'
|
||||
description = 'Common and Combined Log Format file streaming parser'
|
||||
author = 'Kelly Brazil'
|
||||
author_email = 'kellyjonbrazil@gmail.com'
|
||||
compatible = ['linux', 'darwin', 'cygwin', 'win32', 'aix', 'freebsd']
|
||||
streaming = True
|
||||
|
||||
|
||||
__version__ = info.version
|
||||
|
||||
|
||||
def _process(proc_data: JSONDictType) -> JSONDictType:
|
||||
"""
|
||||
Final processing to conform to the schema.
|
||||
|
||||
Parameters:
|
||||
|
||||
proc_data: (Dictionary) raw structured data to process
|
||||
|
||||
Returns:
|
||||
|
||||
Dictionary. Structured data to conform to the schema.
|
||||
"""
|
||||
int_list = {'day', 'year', 'hour', 'minute', 'second', 'status', 'bytes'}
|
||||
|
||||
for key, val in proc_data.items():
|
||||
|
||||
# integer conversions
|
||||
if key in int_list:
|
||||
proc_data[key] = jc.utils.convert_to_int(val)
|
||||
|
||||
# convert `-` and blank values to None
|
||||
if val == '-' or val == '':
|
||||
proc_data[key] = None
|
||||
|
||||
# add unix timestamps
|
||||
if 'date' in proc_data:
|
||||
ts = jc.utils.timestamp(proc_data['date'], format_hint=(1800,))
|
||||
proc_data['epoch'] = ts.naive
|
||||
proc_data['epoch_utc'] = ts.utc
|
||||
|
||||
return proc_data
|
||||
|
||||
|
||||
@add_jc_meta
|
||||
def parse(
|
||||
data: Iterable[str],
|
||||
raw: bool = False,
|
||||
quiet: bool = False,
|
||||
ignore_exceptions: bool = False
|
||||
) -> StreamingOutputType:
|
||||
"""
|
||||
Main text parsing generator function. Returns an iterable object.
|
||||
|
||||
Parameters:
|
||||
|
||||
data: (iterable) line-based text data to parse
|
||||
(e.g. sys.stdin or str.splitlines())
|
||||
|
||||
raw: (boolean) unprocessed output if True
|
||||
quiet: (boolean) suppress warning messages if True
|
||||
ignore_exceptions: (boolean) ignore parsing exceptions if True
|
||||
|
||||
|
||||
Returns:
|
||||
|
||||
Iterable of Dictionaries
|
||||
"""
|
||||
jc.utils.compatibility(__name__, info.compatible, quiet)
|
||||
streaming_input_type_check(data)
|
||||
|
||||
clf_pattern = re.compile(r'''
|
||||
^(?P<host>-|\S+)\s
|
||||
(?P<ident>-|\S+)\s
|
||||
(?P<authuser>-|\S+)\s
|
||||
\[
|
||||
(?P<date>
|
||||
(?P<day>\d+)/
|
||||
(?P<month>\S\S\S)/
|
||||
(?P<year>\d\d\d\d):
|
||||
(?P<hour>\d\d):
|
||||
(?P<minute>\d\d):
|
||||
(?P<second>\d\d)\s
|
||||
(?P<tz>\S+)
|
||||
)
|
||||
\]\s
|
||||
\"(?P<request>.*?)\"\s
|
||||
(?P<status>-|\d\d\d)\s
|
||||
(?P<bytes>-|\d+)\s?
|
||||
(?:\"(?P<referer>.*?)\"\s?)?
|
||||
(?:\"(?P<user_agent>.*?)\"\s?)?
|
||||
(?P<extra>.*)
|
||||
''', re.VERBOSE
|
||||
)
|
||||
|
||||
request_pattern = re.compile(r'''
|
||||
(?P<request_method>\S+)\s
|
||||
(?P<request_url>.*?(?=\sHTTPS?/|$))\s? # positive lookahead for HTTP(S)/ or end of string
|
||||
(?P<request_version>HTTPS?/[\d\.]+)?
|
||||
''', re.VERBOSE
|
||||
)
|
||||
|
||||
for line in data:
|
||||
try:
|
||||
streaming_line_input_type_check(line)
|
||||
output_line: Dict = {}
|
||||
|
||||
if not line.strip():
|
||||
continue
|
||||
|
||||
clf_match = re.match(clf_pattern, line)
|
||||
|
||||
if clf_match:
|
||||
output_line = clf_match.groupdict()
|
||||
|
||||
if output_line.get('request', None):
|
||||
request_string = output_line['request']
|
||||
request_match = re.match(request_pattern, request_string)
|
||||
if request_match:
|
||||
output_line.update(request_match.groupdict())
|
||||
|
||||
else:
|
||||
output_line = {"unparsable": line.strip()}
|
||||
|
||||
if output_line:
|
||||
yield output_line if raw else _process(output_line)
|
||||
else:
|
||||
raise ParseError('Not Common Log Format data')
|
||||
|
||||
except Exception as e:
|
||||
yield raise_or_yield(ignore_exceptions, e, line)
|
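The per-line try/except structure above is the standard jc streaming shape: each input line independently yields either a parsed dict or an `unparsable` object, so one bad line never aborts the stream. A stripped-down, dependency-free sketch of that control flow (not the actual jc API; the pattern only captures the first three CLF fields for illustration):

```python
import re

def stream_parse(lines):
    """Yield one dict per non-blank input line; never abort the stream."""
    head_pattern = re.compile(r'^(?P<host>\S+)\s(?P<ident>\S+)\s(?P<authuser>\S+)\s')
    for line in lines:
        if not line.strip():
            continue  # skip blank lines, like the real parser
        match = head_pattern.match(line)
        if match:
            yield match.groupdict()
        else:
            yield {'unparsable': line.strip()}

sample = [
    '1.1.1.2 - - [11/Nov/2016:03:04:55 +0100] "GET /" 200 83',
    '',
    'garbage',  # no whitespace-separated fields: falls through to unparsable
]
out = list(stream_parse(sample))
```

Because this is a generator, it can be driven directly from `sys.stdin` without buffering the whole log file, which is the point of the `-s` streaming variants.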
@@ -1,7 +1,7 @@
 """jc - JSON Convert ISO 8601 Datetime string parser

 This parser supports standard ISO 8601 strings that include both date and
-time. If no timezone or offset information is available in the sring, then
+time. If no timezone or offset information is available in the string, then
 UTC timezone is used.

 Usage (cli):

@@ -101,7 +101,7 @@ Schema:
     ]

     [0] naive timestamp if "when" field is parsable, else null
-    [1] timezone aware timestamp availabe for UTC, else null
+    [1] timezone aware timestamp available for UTC, else null

 Examples:

@@ -67,7 +67,6 @@ Examples:
     ...
     ]

-
     $ findmnt | jc --findmnt -p -r
     [
       {
@@ -123,7 +122,7 @@ def _process(proc_data: List[JSONDictType]) -> List[JSONDictType]:
         kv_options = {}

         if 'options' in item:
-            opt_list = item['options'].split(',')  # type: ignore
+            opt_list = item['options'].split(',')

             for option in opt_list:
                 if '=' in option:
@@ -117,6 +117,10 @@ def parse(
             streaming_line_input_type_check(line)
             output_line: Dict = {}

+            # skip blank lines
+            if not line.strip():
+                continue
+
             # parse the content here
             # check out helper functions in jc.utils
             # and jc.parsers.universal
@@ -35,13 +35,13 @@ Schema:
     [
       {
         "commit":             string,
-        "author":             string,
-        "author_email":       string,
+        "author":             string/null,
+        "author_email":       string/null,
         "date":               string,
         "epoch":              integer,  # [0]
         "epoch_utc":          integer,  # [1]
-        "commit_by":          string,
-        "commit_by_email":    string,
+        "commit_by":          string/null,
+        "commit_by_email":    string/null,
         "commit_by_date":     string,
         "message":            string,
         "stats" : {
@@ -56,7 +56,7 @@ Schema:
     ]

     [0] naive timestamp if "date" field is parsable, else null
-    [1] timezone aware timestamp availabe for UTC, else null
+    [1] timezone aware timestamp available for UTC, else null

 Examples:

@@ -153,7 +153,7 @@ changes_pattern = re.compile(r'\s(?P<files>\d+)\s+(files? changed),\s+(?P<insert

 class info():
     """Provides parser metadata (version, author, etc.)"""
-    version = '1.2'
+    version = '1.3'
     description = '`git log` command parser'
     author = 'Kelly Brazil'
     author_email = 'kellyjonbrazil@gmail.com'
@@ -202,6 +202,28 @@ def _is_commit_hash(hash_string: str) -> bool:

     return False

+
+def _parse_name_email(line):
+    values = line.rsplit(maxsplit=1)
+    name = None
+    email = None
+
+    if len(values) == 2:
+        name = values[0]
+        if values[1].startswith('<') and values[1].endswith('>'):
+            email = values[1][1:-1]
+    else:
+        if values[0].lstrip().startswith('<') and values[0].endswith('>'):
+            email = values[0].lstrip()[1:-1]
+        else:
+            name = values[0]
+
+    if not name:
+        name = None
+    if not email:
+        email = None  # covers '<>' case turning into null, not ''
+
+    return name, email
+

 def parse(
     data: str,
@@ -271,9 +293,7 @@ def parse(
             continue

         if line.startswith('Author: '):
-            values = line_list[1].rsplit(maxsplit=1)
-            output_line['author'] = values[0]
-            output_line['author_email'] = values[1].strip('<').strip('>')
+            output_line['author'], output_line['author_email'] = _parse_name_email(line_list[1])
             continue

         if line.startswith('Date: '):
@@ -289,9 +309,7 @@ def parse(
             continue

         if line.startswith('Commit: '):
-            values = line_list[1].rsplit(maxsplit=1)
-            output_line['commit_by'] = values[0]
-            output_line['commit_by_email'] = values[1].strip('<').strip('>')
+            output_line['commit_by'], output_line['commit_by_email'] = _parse_name_email(line_list[1])
             continue

         if line.startswith('    '):
@@ -36,13 +36,13 @@ Schema:

     {
       "commit":             string,
-      "author":             string,
-      "author_email":       string,
+      "author":             string/null,
+      "author_email":       string/null,
       "date":               string,
       "epoch":              integer,  # [0]
       "epoch_utc":          integer,  # [1]
-      "commit_by":          string,
-      "commit_by_email":    string,
+      "commit_by":          string/null,
+      "commit_by_email":    string/null,
       "commit_by_date":     string,
       "message":            string,
       "stats" : {
@@ -63,7 +63,7 @@ Schema:
     }

     [0] naive timestamp if "date" field is parsable, else null
-    [1] timezone aware timestamp availabe for UTC, else null
+    [1] timezone aware timestamp available for UTC, else null

 Examples:

@@ -75,6 +75,7 @@ Examples:
 import re
 from typing import List, Dict, Iterable, Union
 import jc.utils
+from jc.parsers.git_log import _parse_name_email
 from jc.streaming import (
     add_jc_meta, streaming_input_type_check, streaming_line_input_type_check, raise_or_yield
 )
@@ -87,7 +88,7 @@ changes_pattern = re.compile(r'\s(?P<files>\d+)\s+(files? changed),\s+(?P<insert

 class info():
     """Provides parser metadata (version, author, etc.)"""
-    version = '1.2'
+    version = '1.3'
     description = '`git log` command streaming parser'
     author = 'Kelly Brazil'
     author_email = 'kellyjonbrazil@gmail.com'
@@ -215,9 +216,7 @@ def parse(
             continue

         if line.startswith('Author: '):
-            values = line_list[1].rsplit(maxsplit=1)
-            output_line['author'] = values[0]
-            output_line['author_email'] = values[1].strip('<').strip('>')
+            output_line['author'], output_line['author_email'] = _parse_name_email(line_list[1])
             continue

         if line.startswith('Date: '):
@@ -233,9 +232,7 @@ def parse(
             continue

         if line.startswith('Commit: '):
-            values = line_list[1].rsplit(maxsplit=1)
-            output_line['commit_by'] = values[0]
-            output_line['commit_by_email'] = values[1].strip('<').strip('>')
+            output_line['commit_by'], output_line['commit_by_email'] = _parse_name_email(line_list[1])
             continue

         if line.startswith('    '):
@@ -59,7 +59,7 @@ Examples:
     ...
     ]
 """
-from typing import List, Dict
+from typing import List, Union
 from jc.jc_types import JSONDictType
 import jc.utils

@@ -94,7 +94,7 @@ def _process(proc_data: List[JSONDictType]) -> JSONDictType:
     for item in proc_data:
         new_dict.update(
             {
-                item['reference']: item['commit']  # type: ignore
+                item['reference']: item['commit']
             }
         )

@@ -105,7 +105,7 @@ def parse(
     data: str,
     raw: bool = False,
     quiet: bool = False
-) -> List[JSONDictType]:
+) -> Union[JSONDictType, List[JSONDictType]]:
     """
     Main text parsing function

@@ -122,7 +122,8 @@ def parse(
     jc.utils.compatibility(__name__, info.compatible, quiet)
     jc.utils.input_type_check(data)

-    raw_output: List[Dict] = []
+    raw_output: List[JSONDictType] = []
+    output_line: JSONDictType = {}

     if jc.utils.has_data(data):

@@ -135,4 +136,4 @@ def parse(
         }
         raw_output.append(output_line)

-    return raw_output if raw else _process(raw_output)  # type: ignore
+    return raw_output if raw else _process(raw_output)
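The `_process` loop above collapses the raw list of `reference`/`commit` records into a single mapping, which is why the new return annotation is `Union[JSONDictType, List[JSONDictType]]`. A minimal sketch of that collapse, with made-up commit hashes:

```python
# hypothetical raw records, shaped like the parser's raw output
raw = [
    {'commit': 'aaaa1111', 'reference': 'HEAD'},
    {'commit': 'bbbb2222', 'reference': 'refs/heads/master'},
]

# same collapse as _process: one dict keyed by reference name
new_dict = {}
for item in raw:
    new_dict.update({item['reference']: item['commit']})

print(new_dict)
```

With `-r`/`raw=True` the caller still gets the list form; only the processed output is the flattened dict.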
@@ -1,4 +1,4 @@
-"""jc - JSON Convert `foo` command output parser
+"""jc - JSON Convert `ifconfig` command output parser

 No `ifconfig` options are supported.

@@ -37,6 +37,7 @@ Schema:
       "ipv6_addr":              string,    # [0]
       "ipv6_mask":              integer,   # [0]
       "ipv6_scope":             string,    # [0]
+      "ipv6_scope_id":          string,    # [0]
       "ipv6_type":              string,    # [0]
       "rx_packets":             integer,
       "rx_bytes":               integer,
@@ -82,10 +83,19 @@ Schema:
       "ipv6": [
         {
           "address":            string,
+          "scope_id":           string,
           "mask":               integer,
           "scope":              string,
           "type":               string
         }
       ],
+      "lanes": [
+        {
+          "lane":               integer,
+          "rx_power_mw":        float,
+          "rx_power_dbm":       float,
+          "tx_bias_ma":         float
+        }
+      ]
     }
   ]
@@ -142,6 +152,7 @@ Examples:
       "ipv6": [
         {
           "address": "fe80::c1cb:715d:bc3e:b8a0",
+          "scope_id": null,
           "mask": 64,
           "scope": "0x20",
           "type": "link"
@@ -190,6 +201,7 @@ Examples:
       "ipv6": [
         {
           "address": "fe80::c1cb:715d:bc3e:b8a0",
+          "scope_id": null,
           "mask": "64",
           "scope": "0x20",
           "type": "link"
@@ -200,14 +212,14 @@ Examples:
 """
 import re
 from ipaddress import IPv4Network
-from typing import List, Dict
+from typing import List, Dict, Optional
 from jc.jc_types import JSONDictType
 import jc.utils


 class info():
     """Provides parser metadata (version, author, etc.)"""
-    version = '2.0'
+    version = '2.1'
     description = '`ifconfig` command parser'
     author = 'Kelly Brazil'
     author_email = 'kellyjonbrazil@gmail.com'
@@ -218,7 +230,7 @@ class info():
 __version__ = info.version


-def _convert_cidr_to_quad(string):
+def _convert_cidr_to_quad(string: str) -> str:
     return str(IPv4Network('0.0.0.0/' + string).netmask)


@@ -237,8 +249,9 @@ def _process(proc_data: List[JSONDictType]) -> List[JSONDictType]:
     int_list = {
         'flags', 'mtu', 'ipv6_mask', 'rx_packets', 'rx_bytes', 'rx_errors', 'rx_dropped',
         'rx_overruns', 'rx_frame', 'tx_packets', 'tx_bytes', 'tx_errors', 'tx_dropped',
-        'tx_overruns', 'tx_carrier', 'tx_collisions', 'metric', 'nd6_options'
+        'tx_overruns', 'tx_carrier', 'tx_collisions', 'metric', 'nd6_options', 'lane'
     }
+    float_list = {'rx_power_mw', 'rx_power_dbm', 'tx_bias_ma'}

     for entry in proc_data:
         for key in entry:
@@ -248,63 +261,72 @@ def _process(proc_data: List[JSONDictType]) -> List[JSONDictType]:
         # convert OSX-style subnet mask to dotted quad
         if 'ipv4_mask' in entry:
             try:
-                if entry['ipv4_mask'].startswith('0x'):  # type: ignore
+                if entry['ipv4_mask'].startswith('0x'):
                     new_mask = entry['ipv4_mask']
-                    new_mask = new_mask.lstrip('0x')  # type: ignore
+                    new_mask = new_mask.lstrip('0x')
                     new_mask = '.'.join(str(int(i, 16)) for i in [new_mask[i:i + 2] for i in range(0, len(new_mask), 2)])
                     entry['ipv4_mask'] = new_mask
             except (ValueError, TypeError, AttributeError):
                 pass

             # for new-style freebsd output convert CIDR mask to dotted-quad to match other output
-            if entry['ipv4_mask'] and not '.' in entry['ipv4_mask']:  # type: ignore
+            if entry['ipv4_mask'] and not '.' in entry['ipv4_mask']:
                 entry['ipv4_mask'] = _convert_cidr_to_quad(entry['ipv4_mask'])

         # convert state value to an array
         if 'state' in entry:
             try:
-                new_state = entry['state'].split(',')  # type: ignore
+                new_state = entry['state'].split(',')
                 entry['state'] = new_state
             except (ValueError, TypeError, AttributeError):
                 pass

         # conversions for list of ipv4 addresses
         if 'ipv4' in entry:
-            for ip_address in entry['ipv4']:  # type: ignore
+            for ip_address in entry['ipv4']:
                 if 'mask' in ip_address:
                     try:
-                        if ip_address['mask'].startswith('0x'):  # type: ignore
-                            new_mask = ip_address['mask']  # type: ignore
+                        if ip_address['mask'].startswith('0x'):
+                            new_mask = ip_address['mask']
                             new_mask = new_mask.lstrip('0x')
                             new_mask = '.'.join(str(int(i, 16)) for i in [new_mask[i:i + 2] for i in range(0, len(new_mask), 2)])
-                            ip_address['mask'] = new_mask  # type: ignore
+                            ip_address['mask'] = new_mask
                     except (ValueError, TypeError, AttributeError):
                         pass

                     # for new-style freebsd output convert CIDR mask to dotted-quad to match other output
-                    if ip_address['mask'] and not '.' in ip_address['mask']:  # type: ignore
-                        ip_address['mask'] = _convert_cidr_to_quad(ip_address['mask'])  # type: ignore
+                    if ip_address['mask'] and not '.' in ip_address['mask']:
+                        ip_address['mask'] = _convert_cidr_to_quad(ip_address['mask'])

         # conversions for list of ipv6 addresses
         if 'ipv6' in entry:
-            for ip_address in entry['ipv6']:  # type: ignore
+            for ip_address in entry['ipv6']:
                 if 'mask' in ip_address:
-                    ip_address['mask'] = jc.utils.convert_to_int(ip_address['mask'])  # type: ignore
+                    ip_address['mask'] = jc.utils.convert_to_int(ip_address['mask'])

+        # conversions for list of lanes
+        if 'lanes' in entry:
+            for lane_item in entry['lanes']:
+                for key in lane_item:
+                    if key in int_list:
+                        lane_item[key] = jc.utils.convert_to_int(lane_item[key])
+                    if key in float_list:
+                        lane_item[key] = jc.utils.convert_to_float(lane_item[key])
+
         # final conversions
         if entry.get('media_flags', None):
-            entry['media_flags'] = entry['media_flags'].split(',')  # type: ignore
+            entry['media_flags'] = entry['media_flags'].split(',')

         if entry.get('nd6_flags', None):
-            entry['nd6_flags'] = entry['nd6_flags'].split(',')  # type: ignore
+            entry['nd6_flags'] = entry['nd6_flags'].split(',')

         if entry.get('options_flags', None):
-            entry['options_flags'] = entry['options_flags'].split(',')  # type: ignore
+            entry['options_flags'] = entry['options_flags'].split(',')

     return proc_data


-def _bundle_match(pattern_list, string):
+def _bundle_match(pattern_list: List[re.Pattern], string: str) -> Optional[re.Match]:
     """Returns a match object if a string matches one of a list of patterns.
     If no match is found, returns None"""
     for pattern in pattern_list:
@@ -371,6 +393,7 @@ def parse(
     interface_item: Dict = interface_obj.copy()
     ipv4_info: List = []
     ipv6_info: List = []
+    lane_info: List = []

     # Below regular expression patterns are based off of the work of:
     # https://github.com/KnightWhoSayNi/ifconfig-parser
@@ -525,9 +548,10 @@ def parse(
         ''', re.IGNORECASE | re.VERBOSE
     )
     re_freebsd_ipv6 = re.compile(r'''
-        \s?inet6\s(?P<address>.*)(?:\%\w+\d+)\s
-        prefixlen\s(?P<mask>\d+)(?:\s\w+)?\s
-        scopeid\s(?P<scope>\w+x\w+)
+        \s?inet6\s(?P<address>.*?)
+        (?:\%(?P<scope_id>\w+\d+))?\s
+        prefixlen\s(?P<mask>\d+).*?(?=scopeid|$)  # positive lookahead for scopeid or end of line
+        (?:scopeid\s(?P<scope>0x\w+))?
         ''', re.IGNORECASE | re.VERBOSE
     )
     re_freebsd_details = re.compile(r'''
@@ -581,6 +605,15 @@ def parse(
         ''', re.IGNORECASE | re.VERBOSE
     )

+    # this pattern is special since it is used to build the lane_info list
+    re_freebsd_lane = re.compile(r'''
+        lane\s(?P<lane>\d+):\s
+        RX\spower:\s(?P<rx_power_mw>\S+)\smW\s
+        \((?P<rx_power_dbm>\S+)\sdBm\)\s
+        TX\sbias:\s(?P<tx_bias_ma>\S+)
+        ''', re.IGNORECASE | re.VERBOSE
+    )
+
     re_linux = [
         re_linux_interface, re_linux_ipv4, re_linux_ipv6, re_linux_state, re_linux_rx, re_linux_tx,
         re_linux_bytes, re_linux_tx_stats
@@ -611,10 +644,13 @@ def parse(
                 interface_item['ipv4'] = ipv4_info
             if ipv6_info:
                 interface_item['ipv6'] = ipv6_info
+            if lane_info:
+                interface_item['lanes'] = lane_info
             raw_output.append(interface_item)
             interface_item = interface_obj.copy()
             ipv4_info = []
             ipv6_info = []
+            lane_info = []

         interface_item.update(interface_match.groupdict())
         continue
@@ -646,6 +682,7 @@ def parse(
             'mask': 'ipv6_mask',
             'broadcast': 'ipv6_bcast',
             'scope': 'ipv6_scope',
+            'scope_id': 'ipv6_scope_id',
             'type': 'ipv6_type'
         }
         for k, v in ipv6_dict.copy().items():
@@ -666,6 +703,12 @@ def parse(
         ipv6_info.append(ipv6_match.groupdict())
         continue

+        # lane information lines
+        lane_match = re.search(re_freebsd_lane, line)
+        if lane_match:
+            lane_info.append(lane_match.groupdict())
+            continue
+
         # All other lines
         other_match = _bundle_match(re_linux + re_openbsd + re_freebsd, line)
         if other_match:
@@ -677,6 +720,8 @@ def parse(
         interface_item['ipv4'] = ipv4_info
     if ipv6_info:
         interface_item['ipv6'] = ipv6_info
+    if lane_info:
+        interface_item['lanes'] = lane_info
     raw_output.append(interface_item)

     return raw_output if raw else _process(raw_output)
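The reworked `re_freebsd_ipv6` pattern (optional `%scope_id` suffix on the address, optional `scopeid` field) and the new `re_freebsd_lane` pattern can be sanity-checked against sample lines. The patterns below are copied from the hunks above; the interface output strings are illustrative:

```python
import re

re_freebsd_ipv6 = re.compile(r'''
    \s?inet6\s(?P<address>.*?)
    (?:\%(?P<scope_id>\w+\d+))?\s
    prefixlen\s(?P<mask>\d+).*?(?=scopeid|$)
    (?:scopeid\s(?P<scope>0x\w+))?
    ''', re.IGNORECASE | re.VERBOSE)

re_freebsd_lane = re.compile(r'''
    lane\s(?P<lane>\d+):\s
    RX\spower:\s(?P<rx_power_mw>\S+)\smW\s
    \((?P<rx_power_dbm>\S+)\sdBm\)\s
    TX\sbias:\s(?P<tx_bias_ma>\S+)
    ''', re.IGNORECASE | re.VERBOSE)

# the %em0 zone index is now captured separately instead of being thrown away
ipv6 = re.search(re_freebsd_ipv6, 'inet6 fe80::1%em0 prefixlen 64 scopeid 0x1').groupdict()

# transceiver lane line as seen in BSD ifconfig -v output (sample values)
lane = re.search(re_freebsd_lane, 'lane 1: RX power: 0.57 mW (-2.44 dBm) TX bias: 6.43 mA').groupdict()
```

All captured values are strings at this stage; `_process()` later converts `mask` and `lane` to integers and the power/bias fields to floats via the new `float_list`.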
@@ -108,7 +108,7 @@ import jc.parsers.universal

 class info():
     """Provides parser metadata (version, author, etc.)"""
-    version = '1.2'
+    version = '1.3'
     description = '`iostat` command streaming parser'
     author = 'Kelly Brazil'
     author_email = 'kellyjonbrazil@gmail.com'
@@ -196,7 +196,7 @@ def parse(data, raw=False, quiet=False, ignore_exceptions=False):
             output_line = {}

             # ignore blank lines and header line
-            if line == '\n' or line == '' or line.startswith('Linux'):
+            if not line.strip() or line.startswith('Linux'):
                 continue

             if line.startswith('avg-cpu:'):
@@ -560,7 +560,7 @@ def _process(proc_data: Dict) -> Dict:

 def _b2a(byte_string: bytes) -> str:
     """Convert a byte string to a colon-delimited hex ascii string"""
-    # need try/except since seperator was only introduced in python 3.8.
+    # need try/except since separator was only introduced in python 3.8.
     # provides compatibility for python 3.6 and 3.7.
     try:
         return binascii.hexlify(byte_string, ':').decode('utf-8')
@@ -180,7 +180,7 @@ def _post_parse(data):
         ssid = {k: v for k, v in ssid.items() if v}
         cleandata.append(ssid)

-    # remove asterisks from begining of values
+    # remove asterisks from beginning of values
     for ssid in cleandata:
         for key in ssid:
             if ssid[key].startswith('*'):
@@ -78,7 +78,7 @@ def _process(proc_data: Dict) -> Dict:

 def _b2a(byte_string: bytes) -> str:
     """Convert a byte string to a colon-delimited hex ascii string"""
-    # need try/except since seperator was only introduced in python 3.8.
+    # need try/except since separator was only introduced in python 3.8.
    # provides compatibility for python 3.6 and 3.7.
     try:
         return binascii.hexlify(byte_string, ':').decode('utf-8')
@@ -77,7 +77,7 @@ from jc.exceptions import ParseError

 class info():
     """Provides parser metadata (version, author, etc.)"""
-    version = '1.1'
+    version = '1.2'
     description = '`ls` command streaming parser'
     author = 'Kelly Brazil'
     author_email = 'kellyjonbrazil@gmail.com'
@@ -148,7 +148,7 @@ def parse(data, raw=False, quiet=False, ignore_exceptions=False):
                 continue

             # skip blank lines
-            if line.strip() == '':
+            if not line.strip():
                 continue

             # Look for parent line if glob or -R is used
@@ -158,7 +158,7 @@ def _process(proc_data: List[JSONDictType]) -> List[JSONDictType]:
         for key, val in item.items():
             output[key] = val
             if key in int_list:
-                output[key + '_int'] = int(val, 16)  # type: ignore
+                output[key + '_int'] = int(val, 16)
         new_list.append(output)

     return new_list
@@ -228,7 +228,7 @@ def _normalize_header(keyname: str) -> str:
 def _add_text_kv(key: str, value: Optional[str]) -> Optional[Dict]:
     """
     Add keys with _text suffix if there is a text description inside
-    paranthesis at the end of a value. The value of the _text field will
+    parenthesis at the end of a value. The value of the _text field will
     only be the text inside the parenthesis. This allows cleanup of the
     original field (convert to int/float/etc) without losing information.
     """
351  jc/parsers/openvpn.py  Normal file
@@ -0,0 +1,351 @@
"""jc - JSON Convert openvpn-status.log file parser
|
||||
|
||||
The `*_epoch` calculated timestamp fields are naive. (i.e. based on
|
||||
the local time of the system the parser is run on)
|
||||
|
||||
Usage (cli):
|
||||
|
||||
$ cat openvpn-status.log | jc --openvpn
|
||||
|
||||
Usage (module):
|
||||
|
||||
import jc
|
||||
result = jc.parse('openvpn', openvpn_status_log_file_output)
|
||||
|
||||
Schema:
|
||||
|
||||
{
|
||||
"clients": [
|
||||
{
|
||||
"common_name": string,
|
||||
"real_address": string,
|
||||
"real_address_prefix": integer, # [0]
|
||||
"real_address_port": integer, # [0]
|
||||
"bytes_received": integer,
|
||||
"bytes_sent": integer,
|
||||
"connected_since": string,
|
||||
"connected_since_epoch": integer,
|
||||
"updated": string,
|
||||
"updated_epoch": integer,
|
||||
}
|
||||
],
|
||||
"routing_table": [
|
||||
{
|
||||
"virtual_address": string,
|
||||
"virtual_address_prefix": integer, # [0]
|
||||
"virtual_address_port": integer, # [0]
|
||||
"common_name": string,
|
||||
"real_address": string,
|
||||
"real_address_prefix": integer, # [0]
|
||||
"real_address_port": integer, # [0]
|
||||
"last_reference": string,
|
||||
"last_reference_epoch": integer,
|
||||
}
|
||||
],
|
||||
"global_stats": {
|
||||
"max_bcast_mcast_queue_len": integer
|
||||
}
|
||||
}
|
||||
|
||||
[0] null/None if not found
|
||||
|
||||
Examples:
|
||||
|
||||
$ cat openvpn-status.log | jc --openvpn -p
|
||||
{
|
||||
"clients": [
|
||||
{
|
||||
"common_name": "foo@example.com",
|
||||
"real_address": "10.10.10.10",
|
||||
"bytes_received": 334948,
|
||||
"bytes_sent": 1973012,
|
||||
"connected_since": "Thu Jun 18 04:23:03 2015",
|
||||
"updated": "Thu Jun 18 08:12:15 2015",
|
||||
"real_address_prefix": null,
|
||||
"real_address_port": 49502,
|
||||
"connected_since_epoch": 1434626583,
|
||||
"updated_epoch": 1434640335
|
||||
},
|
||||
{
|
||||
"common_name": "foo@example.com",
|
||||
"real_address": "10.10.10.10",
|
||||
"bytes_received": 334948,
|
||||
"bytes_sent": 1973012,
|
||||
"connected_since": "Thu Jun 18 04:23:03 2015",
|
||||
"updated": "Thu Jun 18 08:12:15 2015",
|
||||
"real_address_prefix": null,
|
||||
"real_address_port": 49503,
|
||||
"connected_since_epoch": 1434626583,
|
||||
"updated_epoch": 1434640335
|
||||
}
|
||||
],
|
||||
"routing_table": [
|
||||
{
|
||||
"virtual_address": "192.168.255.118",
|
||||
"common_name": "baz@example.com",
|
||||
"real_address": "10.10.10.10",
|
||||
"last_reference": "Thu Jun 18 08:12:09 2015",
|
||||
"virtual_address_prefix": null,
|
||||
"virtual_address_port": null,
|
||||
"real_address_prefix": null,
|
||||
"real_address_port": 63414,
|
||||
"last_reference_epoch": 1434640329
|
||||
},
|
||||
{
|
||||
"virtual_address": "10.200.0.0",
|
||||
"common_name": "baz@example.com",
|
||||
"real_address": "10.10.10.10",
|
||||
"last_reference": "Thu Jun 18 08:12:09 2015",
|
||||
"virtual_address_prefix": 16,
|
||||
"virtual_address_port": null,
|
||||
"real_address_prefix": null,
|
||||
"real_address_port": 63414,
|
||||
"last_reference_epoch": 1434640329
|
||||
}
|
||||
],
|
||||
"global_stats": {
|
||||
"max_bcast_mcast_queue_len": 0
|
||||
}
|
||||
}
|
||||
|
||||
$ cat openvpn-status.log | jc --openvpn -p -r
|
||||
{
|
||||
"clients": [
|
||||
{
|
||||
"common_name": "foo@example.com",
|
||||
"real_address": "10.10.10.10:49502",
|
||||
"bytes_received": "334948",
|
||||
"bytes_sent": "1973012",
|
||||
"connected_since": "Thu Jun 18 04:23:03 2015",
|
||||
"updated": "Thu Jun 18 08:12:15 2015"
|
||||
},
|
||||
{
|
||||
"common_name": "foo@example.com",
|
||||
"real_address": "10.10.10.10:49503",
|
||||
"bytes_received": "334948",
|
||||
"bytes_sent": "1973012",
|
||||
"connected_since": "Thu Jun 18 04:23:03 2015",
|
||||
"updated": "Thu Jun 18 08:12:15 2015"
|
||||
}
|
||||
],
|
||||
"routing_table": [
|
||||
{
|
||||
"virtual_address": "192.168.255.118",
|
||||
"common_name": "baz@example.com",
|
||||
"real_address": "10.10.10.10:63414",
|
||||
"last_reference": "Thu Jun 18 08:12:09 2015"
|
||||
},
|
||||
{
|
||||
"virtual_address": "10.200.0.0/16",
|
||||
"common_name": "baz@example.com",
|
||||
"real_address": "10.10.10.10:63414",
|
||||
"last_reference": "Thu Jun 18 08:12:09 2015"
|
||||
}
|
||||
],
|
||||
"global_stats": {
|
||||
"max_bcast_mcast_queue_len": "0"
|
||||
}
|
||||
}
|
||||
"""
|
||||
import re
|
||||
import ipaddress
|
||||
from typing import List, Dict, Tuple
|
||||
from jc.jc_types import JSONDictType
|
||||
import jc.utils
|
||||
|
||||
|
||||
class info():
|
||||
"""Provides parser metadata (version, author, etc.)"""
|
||||
version = '1.0'
|
||||
description = 'openvpn-status.log file parser'
|
||||
author = 'Kelly Brazil'
|
||||
author_email = 'kellyjonbrazil@gmail.com'
|
||||
compatible = ['linux', 'darwin', 'cygwin', 'win32', 'aix', 'freebsd']
|
||||
|
||||
|
||||
__version__ = info.version
|
||||
|
||||
|
||||
def _split_addr(addr_str: str) -> Tuple:
|
||||
"""Check the type of address (v4, v6, mac) and split out the address,
|
||||
prefix, and port. Values are None if they don't exist."""
|
||||
address = possible_addr = prefix = port = possible_port = None
|
||||
|
||||
try:
|
||||
address, prefix = addr_str.rsplit('/', maxsplit=1)
|
||||
except Exception:
|
||||
address = addr_str
|
||||
|
||||
# is this a mac address? then stop
|
||||
if re.match(r'(?:\S\S\:){5}\S\S', address):
|
||||
return address, prefix, port
|
||||
|
||||
# is it an ipv4 with port or just ipv6?
|
||||
if ':' in address:
|
||||
try:
|
||||
possible_addr, possible_port = address.rsplit(':', maxsplit=1)
|
||||
_ = ipaddress.IPv4Address(possible_addr)
|
||||
address = possible_addr
|
||||
port = possible_port
|
||||
# assume it was an IPv6 address
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
return address, prefix, port
|
||||
|
||||
|
||||
def _process(proc_data: JSONDictType) -> JSONDictType:
|
||||
"""
|
||||
Final processing to conform to the schema.
|
||||
|
||||
Parameters:
|
||||
|
||||
proc_data: (Dictionary) raw structured data to process
|
||||
|
||||
Returns:
|
||||
|
||||
Dictionary. Structured to conform to the schema.
|
||||
"""
|
||||
int_list = {'bytes_received', 'bytes_sent', 'max_bcast_mcast_queue_len'}
|
||||
date_fields = {'connected_since', 'updated', 'last_reference'}
|
||||
addr_fields = {'real_address', 'virtual_address'}
|
||||
|
||||
if 'clients' in proc_data:
|
||||
for item in proc_data['clients']:
|
||||
for k, v in item.copy().items():
|
||||
if k in int_list:
|
||||
item[k] = jc.utils.convert_to_int(v)
|
||||
|
||||
if k in date_fields:
|
||||
dt = jc.utils.timestamp(item[k], format_hint=(1000,))
|
||||
item[k + '_epoch'] = dt.naive
|
||||
|
||||
if k in addr_fields:
|
||||
addr, prefix, port = _split_addr(v)
|
||||
item[k] = addr
|
||||
item[k + '_prefix'] = jc.utils.convert_to_int(prefix)
|
||||
item[k + '_port'] = jc.utils.convert_to_int(port)
|
||||
|
||||
if 'routing_table' in proc_data:
|
||||
for item in proc_data['routing_table']:
|
||||
for k, v in item.copy(). items():
|
||||
if k in date_fields:
|
||||
dt = jc.utils.timestamp(item[k], format_hint=(1000,))
|
||||
item[k + '_epoch'] = dt.naive
|
||||
|
||||
if k in addr_fields:
|
||||
addr, prefix, port = _split_addr(v)
|
||||
item[k] = addr
|
||||
item[k + '_prefix'] = jc.utils.convert_to_int(prefix)
|
||||
item[k + '_port'] = jc.utils.convert_to_int(port)
|
||||
|
||||
if 'global_stats' in proc_data:
|
||||
for k, v in proc_data['global_stats'].items():
|
||||
if k in int_list:
|
||||
if k in int_list:
|
||||
proc_data['global_stats'][k] = jc.utils.convert_to_int(v)
|
||||
|
||||
return proc_data
|
||||
|
||||
|
||||
def parse(
|
||||
data: str,
|
||||
raw: bool = False,
|
||||
quiet: bool = False
|
||||
) -> JSONDictType:
|
||||
"""
|
||||
Main text parsing function
|
||||
|
||||
Parameters:
|
||||
|
||||
data: (string) text data to parse
|
||||
raw: (boolean) unprocessed output if True
|
||||
quiet: (boolean) suppress warning messages if True
|
||||
|
||||
Returns:
|
||||
|
||||
Dictionary. Raw or processed structured data.
|
||||
"""
|
||||
jc.utils.compatibility(__name__, info.compatible, quiet)
|
||||
jc.utils.input_type_check(data)
|
||||
|
||||
raw_output: Dict = {}
|
||||
clients: List[Dict] = []
|
||||
routing_table: List[Dict] = []
|
||||
global_stats: Dict = {}
|
||||
section: str = '' # clients, routing, stats
|
||||
updated: str = ''
|
||||
|
||||
if jc.utils.has_data(data):
|
||||
|
||||
for line in filter(None, data.splitlines()):
|
||||
|
||||
if line.startswith('OpenVPN CLIENT LIST'):
|
||||
section = 'clients'
|
||||
continue
|
||||
|
||||
if line.startswith('ROUTING TABLE'):
|
||||
section = 'routing'
|
||||
continue
|
||||
|
||||
if line.startswith('GLOBAL STATS'):
|
||||
section = 'stats'
|
||||
continue
|
||||
|
||||
if line.startswith('END'):
|
||||
break
|
||||
|
||||
if section == 'clients' and line.startswith('Updated,'):
|
||||
_, updated = line.split(',', maxsplit=1)
|
||||
continue
|
||||
|
||||
if section == 'clients' and line.startswith('Common Name,Real Address,'):
|
||||
continue
|
||||
|
||||
if section == 'clients':
|
||||
c_name, real_addr, r_bytes, s_bytes, connected = line.split(',', maxsplit=5)
|
||||
clients.append(
|
||||
{
|
||||
'common_name': c_name,
|
||||
'real_address': real_addr,
|
||||
'bytes_received': r_bytes,
|
||||
'bytes_sent': s_bytes,
|
||||
'connected_since': connected,
|
||||
'updated': updated
|
||||
}
|
||||
)
|
||||
continue
|
||||
|
||||
if section == 'routing' and line.startswith('Virtual Address,Common Name,'):
|
||||
continue
|
||||
|
||||
if section == 'routing':
|
||||
# Virtual Address,Common Name,Real Address,Last Ref
|
||||
# 192.168.255.118,baz@example.com,10.10.10.10:63414,Thu Jun 18 08:12:09 2015
|
||||
virt_addr, c_name, real_addr, last_ref = line.split(',', maxsplit=4)
|
||||
route = {
|
||||
'virtual_address': virt_addr,
|
||||
'common_name': c_name,
|
||||
'real_address': real_addr,
|
||||
'last_reference': last_ref
|
||||
}
|
||||
|
||||
# fixup for virtual addresses ending in "C"
|
||||
if 'virtual_address' in route:
|
||||
if route['virtual_address'].endswith('C'):
|
||||
route['virtual_address'] = route['virtual_address'][:-1]
|
||||
|
||||
routing_table.append(route)
|
||||
continue
|
||||
|
||||
if section == "stats":
|
||||
if line.startswith('Max bcast/mcast queue length'):
|
||||
global_stats['max_bcast_mcast_queue_len'] = line.split(',', maxsplit=1)[1]
|
||||
continue
|
||||
|
||||
raw_output['clients'] = clients
|
||||
raw_output['routing_table'] = routing_table
|
||||
raw_output['global_stats'] = {}
|
||||
raw_output['global_stats'].update(global_stats)
|
||||
|
||||
return raw_output if raw else _process(raw_output)
|
@ -16,12 +16,15 @@ Usage (module):
Schema:

    {
      'partition': string,
      'name': string,
      'short_name': string,
      'type': string
      "partition": string,
      "efi_bootmgr": string,    # [0]
      "name": string,
      "short_name": string,
      "type": string
    }

    [0] only exists if an EFI boot manager is detected

Examples:

    $ os-prober | jc --os-prober -p
@ -39,7 +42,7 @@ import jc.utils

class info():
    """Provides parser metadata (version, author, etc.)"""
    version = '1.0'
    version = '1.1'
    description = '`os-prober` command parser'
    author = 'Kelly Brazil'
    author_email = 'kellyjonbrazil@gmail.com'
@ -62,6 +65,12 @@ def _process(proc_data: JSONDictType) -> JSONDictType:

        Dictionary. Structured to conform to the schema.
    """
    # check for EFI partition@boot-manager and split/add fields
    if 'partition' in proc_data and '@' in proc_data['partition']:
        new_part, efi_bootmgr = proc_data['partition'].split('@', maxsplit=1)
        proc_data['partition'] = new_part
        proc_data['efi_bootmgr'] = efi_bootmgr

    return proc_data

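The new `efi_bootmgr` split above is a single-delimiter field split. A minimal standalone sketch (the sample partition value is hypothetical, chosen only to illustrate the `partition@boot-manager` shape):

```python
def split_efi(partition: str) -> dict:
    """Split an os-prober 'partition@boot-manager' value into two fields.
    Mirrors the new _process() logic; plain partitions pass through."""
    result = {'partition': partition}
    if '@' in partition:
        new_part, efi_bootmgr = partition.split('@', maxsplit=1)
        result['partition'] = new_part
        result['efi_bootmgr'] = efi_bootmgr
    return result
```

Splitting with `maxsplit=1` keeps any later `@` characters inside the boot-manager path intact.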
125  jc/parsers/pgpass.py  Normal file
@ -0,0 +1,125 @@
"""jc - JSON Convert PostgreSQL password file parser

Usage (cli):

    $ cat /var/lib/postgresql/.pgpass | jc --pgpass

Usage (module):

    import jc
    result = jc.parse('pgpass', postgres_password_file)

Schema:

    [
      {
        "hostname": string,
        "port": string,
        "database": string,
        "username": string,
        "password": string
      }
    ]

Examples:

    $ cat /var/lib/postgresql/.pgpass | jc --pgpass -p
    [
      {
        "hostname": "dbserver",
        "port": "*",
        "database": "db1",
        "username": "dbuser",
        "password": "pwd123"
      },
      {
        "hostname": "dbserver2",
        "port": "8888",
        "database": "inventory",
        "username": "joe:user",
        "password": "abc123"
      },
      ...
    ]
"""
from typing import List, Dict
from jc.jc_types import JSONDictType
import jc.utils


class info():
    """Provides parser metadata (version, author, etc.)"""
    version = '1.0'
    description = 'PostgreSQL password file parser'
    author = 'Kelly Brazil'
    author_email = 'kellyjonbrazil@gmail.com'
    compatible = ['linux', 'darwin', 'cygwin', 'win32', 'aix', 'freebsd']


__version__ = info.version


def _process(proc_data: List[JSONDictType]) -> List[JSONDictType]:
    """
    Final processing to conform to the schema.

    Parameters:

        proc_data:   (List of Dictionaries) raw structured data to process

    Returns:

        List of Dictionaries. Structured to conform to the schema.
    """
    return proc_data


def parse(
    data: str,
    raw: bool = False,
    quiet: bool = False
) -> List[JSONDictType]:
    """
    Main text parsing function

    Parameters:

        data:        (string)  text data to parse
        raw:         (boolean) unprocessed output if True
        quiet:       (boolean) suppress warning messages if True

    Returns:

        List of Dictionaries. Raw or processed structured data.
    """
    jc.utils.compatibility(__name__, info.compatible, quiet)
    jc.utils.input_type_check(data)

    raw_output: List[Dict] = []

    if jc.utils.has_data(data):

        for line in filter(None, data.splitlines()):

            # ignore comment lines
            if line.strip().startswith('#'):
                continue

            # convert escaped characters (\ and :)
            line = line.replace(':', '\u2063')
            line = line.replace('\\\\', '\\')
            line = line.replace('\\\u2063', ':')

            hostname, port, database, username, password = line.split('\u2063')

            raw_output.append(
                {
                    'hostname': hostname,
                    'port': port,
                    'database': database,
                    'username': username,
                    'password': password
                }
            )

    return raw_output if raw else _process(raw_output)

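The escaping trick in the pgpass parser above deserves a closer look: `.pgpass` fields are colon-delimited, but `\:` and `\\` are escapes, so a plain `split(':')` would break usernames like `joe:user`. The parser first swaps every raw `:` for a sentinel character (`\u2063`, the invisible separator), then resolves the escapes, then splits on the sentinel. A standalone sketch of just that step:

```python
def split_pgpass_line(line: str) -> list:
    """Split one .pgpass line on unescaped colons, honoring \\: and \\\\ escapes.
    Mirrors the sentinel-character technique used by the parser above."""
    line = line.replace(':', '\u2063')        # every raw ':' becomes a sentinel
    line = line.replace('\\\\', '\\')         # '\\' -> '\'
    line = line.replace('\\\u2063', ':')      # escaped ':' is restored literally
    hostname, port, database, username, password = line.split('\u2063')
    return [hostname, port, database, username, password]
```

Because the sentinel never appears in real input, the escaped colon survives the split while the unescaped delimiters do not.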
@ -161,7 +161,7 @@ def parse(
                continue

            if not line.startswith('#') and not found_first_hash:
                # skip preample lines before header row
                # skip preamble lines before header row
                continue

            if line.startswith('#') and not found_first_hash:

@ -85,7 +85,7 @@ from jc.exceptions import ParseError

class info():
    """Provides parser metadata (version, author, etc.)"""
    version = '1.1'
    version = '1.2'
    description = '`ping` and `ping6` command streaming parser'
    author = 'Kelly Brazil'
    author_email = 'kellyjonbrazil@gmail.com'
@ -492,7 +492,7 @@ def parse(data, raw=False, quiet=False, ignore_exceptions=False):
        output_line = {}

        # skip blank lines
        if line.strip() == '':
        if not line.strip():
            continue

        # skip warning lines

@ -80,7 +80,7 @@ def _process(proc_data: Dict) -> Dict:

def _b2a(byte_string: bytes) -> str:
    """Convert a byte string to a colon-delimited hex ascii string"""
    # need try/except since seperator was only introduced in python 3.8.
    # need try/except since separator was only introduced in python 3.8.
    # provides compatibility for python 3.6 and 3.7.
    try:
        return binascii.hexlify(byte_string, ':').decode('utf-8')

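The hunk above only shows the `try` branch of the compatibility shim. A complete self-contained sketch of the pattern it describes (the `except` branch here is an assumption about the fallback, not the actual `jc` implementation):

```python
import binascii


def b2a(byte_string: bytes) -> str:
    """Colon-delimited hex string, working on Python 3.6+.
    hexlify() only gained its separator argument in Python 3.8."""
    try:
        return binascii.hexlify(byte_string, ':').decode('utf-8')
    except TypeError:
        # assumed fallback for python 3.6/3.7: hexlify then re-join pairwise
        hexstr = binascii.hexlify(byte_string).decode('utf-8')
        return ':'.join(hexstr[i:i + 2] for i in range(0, len(hexstr), 2))
```

On 3.8+ the first branch runs; on older interpreters `hexlify()` raises `TypeError` for the extra argument and the manual join takes over.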
@ -1,7 +1,7 @@
"""jc - JSON Convert Proc file output parser

This parser automatically identifies the Proc file and calls the
corresponding parser to peform the parsing.
corresponding parser to perform the parsing.

Magic syntax for converting `/proc` files is also supported by running
`jc /proc/<path to file>`. Any `jc` options must be specified before the

@ -88,7 +88,7 @@ from jc.streaming import (

class info():
    """Provides parser metadata (version, author, etc.)"""
    version = '1.1'
    version = '1.2'
    description = '`rsync` command streaming parser'
    author = 'Kelly Brazil'
    author_email = 'kellyjonbrazil@gmail.com'
@ -267,7 +267,7 @@ def parse(
        output_line: Dict = {}

        # ignore blank lines
        if line == '':
        if not line.strip():
            continue

        file_line = file_line_re.match(line)

@ -78,7 +78,7 @@ def _process(proc_data: JSONDictType) -> JSONDictType:

    for item in int_list:
        if item in proc_data:
            proc_data[item] = int(proc_data[item])  # type: ignore
            proc_data[item] = int(proc_data[item])

    return proc_data

@ -156,7 +156,7 @@ Schema:

Examples:

    $ sshd -T | jc --sshd_conf -p
    $ sshd -T | jc --sshd-conf -p
    {
      "acceptenv": [
        "LANG",
@ -371,7 +371,7 @@ Examples:
      "subsystem_command": "/usr/lib/openssh/sftp-server"
    }

    $ sshd -T | jc --sshd_conf -p -r
    $ sshd -T | jc --sshd-conf -p -r
    {
      "acceptenv": [
        "LANG",

@ -526,7 +526,7 @@ def _process(proc_data: JSONDictType) -> JSONDictType:
        # this is a list value
        if key == 'acceptenv':
            new_list: List[str] = []
            for item in val:  # type: ignore
            for item in val:
                new_list.extend(item.split())
            proc_data[key] = new_list
            continue
@ -534,13 +534,13 @@ def _process(proc_data: JSONDictType) -> JSONDictType:
        # this is a list value
        if key == 'include':
            new_list = []
            for item in val:  # type: ignore
            for item in val:
                new_list.extend(item.split())
            proc_data[key] = new_list
            continue

        if key == 'maxstartups':
            maxstart_split = val.split(':', maxsplit=2)  # type: ignore
            maxstart_split = val.split(':', maxsplit=2)
            proc_data[key] = maxstart_split[0]
            if len(maxstart_split) > 1:
                proc_data[key + '_rate'] = maxstart_split[1]
@ -550,31 +550,31 @@ def _process(proc_data: JSONDictType) -> JSONDictType:

        if key == 'port':
            port_list: List[int] = []
            for item in val:  # type: ignore
            for item in val:
                port_list.append(int(item))
            proc_data[key] = port_list
            continue

        if key == 'rekeylimit':
            rekey_split = val.split(maxsplit=1)  # type: ignore
            rekey_split = val.split(maxsplit=1)
            proc_data[key] = rekey_split[0]
            if len(rekey_split) > 1:
                proc_data[key + '_time'] = rekey_split[1]
            continue

        if key == 'subsystem':
            sub_split = val.split(maxsplit=1)  # type: ignore
            sub_split = val.split(maxsplit=1)
            proc_data[key] = sub_split[0]
            if len(sub_split) > 1:
                proc_data[key + '_command'] = sub_split[1]
            continue

        if key in split_fields_space:
            proc_data[key] = val.split()  # type: ignore
            proc_data[key] = val.split()
            continue

        if key in split_fields_comma:
            proc_data[key] = val.split(',')  # type: ignore
            proc_data[key] = val.split(',')
            continue

    for key, val in proc_data.items():

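The `maxstartups` handling above shows the `sshd_conf` parser's pattern of splitting compound values into suffixed fields. A standalone sketch of that pattern (the hunk only shows the `_rate` assignment; the `_full` field for the third `start:rate:full` component is an assumption):

```python
def split_maxstartups(val: str) -> dict:
    """Split an sshd 'MaxStartups start:rate:full' value into suffixed fields.
    Mirrors the splitting pattern above; '_full' is an assumed field name."""
    parts = val.split(':', maxsplit=2)
    result = {'maxstartups': parts[0]}
    if len(parts) > 1:
        result['maxstartups_rate'] = parts[1]
    if len(parts) > 2:
        result['maxstartups_full'] = parts[2]
    return result
```

A bare value like `MaxStartups 10` yields only the base key, which is why each suffixed assignment is guarded by a length check.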
@ -84,7 +84,7 @@ from jc.exceptions import ParseError

class info():
    """Provides parser metadata (version, author, etc.)"""
    version = '1.2'
    version = '1.3'
    description = '`stat` command streaming parser'
    author = 'Kelly Brazil'
    author_email = 'kellyjonbrazil@gmail.com'
@ -165,7 +165,7 @@ def parse(
        line = line.rstrip()

        # ignore blank lines
        if line == '':
        if not line.strip():
            continue

        # linux output

@ -48,7 +48,7 @@ Blank values converted to `null`/`None`.
    ]

    [0] naive timestamp if "timestamp" field is parsable, else null
    [1] timezone aware timestamp availabe for UTC, else null
    [1] timezone aware timestamp available for UTC, else null
    [2] this field exists if the syslog line is not parsable. The value
        is the original syslog line.

@ -59,7 +59,7 @@ Blank values converted to `null`/`None`.
    }

    [0] naive timestamp if "timestamp" field is parsable, else null
    [1] timezone aware timestamp availabe for UTC, else null
    [1] timezone aware timestamp available for UTC, else null
    [2] this field exists if the syslog line is not parsable. The value
        is the original syslog line.

@ -143,7 +143,7 @@ def _process(proc_data: JSONDictType) -> JSONDictType:
        List of Dictionaries. Structured to conform to the schema.
    """
    if 'L' in proc_data:
        proc_data['L'] = int(proc_data['L'])  # type: ignore
        proc_data['L'] = int(proc_data['L'])

    return proc_data

@ -100,7 +100,7 @@ from jc.exceptions import ParseError

class info():
    """Provides parser metadata (version, author, etc.)"""
    version = '1.1'
    version = '1.2'
    description = '`vmstat` command streaming parser'
    author = 'Kelly Brazil'
    author_email = 'kellyjonbrazil@gmail.com'
@ -177,7 +177,7 @@ def parse(data, raw=False, quiet=False, ignore_exceptions=False):
        output_line = {}

        # skip blank lines
        if line.strip() == '':
        if not line.strip():
            continue

        # detect output type

@ -441,7 +441,7 @@ def _i2b(integer: int) -> bytes:

def _b2a(byte_string: bytes) -> str:
    """Convert a byte string to a colon-delimited hex ascii string"""
    # need try/except since seperator was only introduced in python 3.8.
    # need try/except since separator was only introduced in python 3.8.
    # provides compatibility for python 3.6 and 3.7.
    try:
        return binascii.hexlify(byte_string, ':').decode('utf-8')

@ -2,7 +2,7 @@

from functools import wraps
from typing import Dict, Tuple, Union, Iterable, Callable, TypeVar, cast, Any
from .jc_types import JSONDictType, MetadataType
from .jc_types import JSONDictType


F = TypeVar('F', bound=Callable[..., Any])
@ -31,7 +31,7 @@ def stream_success(output_line: JSONDictType, ignore_exceptions: bool) -> JSONDi
    return output_line


def stream_error(e: BaseException, line: str) -> Dict[str, MetadataType]:
def stream_error(e: BaseException, line: str) -> JSONDictType:
    """
    Return an error `_jc_meta` field.
    """

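After the return-type change above, `stream_error` simply produces a JSON-compatible dictionary. A hedged standalone sketch of what such an error object can look like (the exact `_jc_meta` field names are an assumption for illustration, not a copy of the `jc.streaming` implementation):

```python
def stream_error(e: BaseException, line: str) -> dict:
    """Build a JSON-Lines error record for a line a streaming parser
    could not handle. Field names inside _jc_meta are assumed."""
    return {
        '_jc_meta': {
            'success': False,
            'error': f'{e.__class__.__name__}: {e}',
            'line': line.strip()
        }
    }
```

A streaming parser running with `ignore_exceptions` can emit such a record and keep going instead of aborting the whole stream.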
41  jc/utils.py
@ -6,7 +6,7 @@ import shutil
from datetime import datetime, timezone
from textwrap import TextWrapper
from functools import lru_cache
from typing import List, Dict, Iterable, Union, Optional, TextIO
from typing import Any, List, Dict, Iterable, Union, Optional, TextIO
from .jc_types import TimeStampFormatType


@ -141,7 +141,7 @@ def compatibility(mod_name: str, compatible: List[str], quiet: bool = False) ->
                     the parser. compatible options:
                     linux, darwin, cygwin, win32, aix, freebsd

        quiet:       (bool) supress compatibility message if True
        quiet:       (bool) suppress compatibility message if True

    Returns:

@ -280,7 +280,7 @@ def input_type_check(data: object) -> None:


class timestamp:
    __slots__ = ('string', 'format', 'naive', 'utc')
    __slots__ = ('string', 'format', 'naive', 'utc', 'iso')

    def __init__(self,
                 datetime_string: Optional[str],
@ -314,6 +314,9 @@ class timestamp:

            utc (int | None): aware timestamp only if UTC timezone
                detected in datetime string. None if conversion fails.

            iso (str | None): ISO string - timezone information is output
                only if UTC timezone is detected in the datetime string.
        """
        self.string = datetime_string

@ -326,16 +329,17 @@ class timestamp:
        self.format = dt['format']
        self.naive = dt['timestamp_naive']
        self.utc = dt['timestamp_utc']
        self.iso = dt['iso']

    def __repr__(self) -> str:
        return f'timestamp(string={self.string!r}, format={self.format}, naive={self.naive}, utc={self.utc})'
        return f'timestamp(string={self.string!r}, format={self.format}, naive={self.naive}, utc={self.utc}, iso={self.iso!r})'

    @staticmethod
    @lru_cache(maxsize=512)
    @lru_cache(maxsize=2048)
    def _parse_dt(
        dt_string: Optional[str],
        format_hint: Optional[Iterable[int]] = None
    ) -> Dict[str, Optional[int]]:
    ) -> Dict[str, Any]:
        """
        Input a datetime text string of several formats and convert to
        a naive or timezone-aware epoch timestamp in UTC.
@ -366,6 +370,9 @@ class timestamp:
                # aware timestamp only if UTC timezone detected.
                # None if conversion fails.
                "timestamp_utc": int

                # ISO string. None if conversion fails.
                "iso": str
            }

            The `format` integer denotes which date_time format
@ -380,6 +387,9 @@ class timestamp:
            timezone is not found in the date-time string), then this
            field will be None.

            The `iso` string will only have timezone information if the
            UTC timezone is detected in `dt_string`.

            If the conversion completely fails, all fields will be None.
        """
        formats: tuple[TimeStampFormatType, ...] = (
@ -396,6 +406,9 @@ class timestamp:
            {'id': 1700, 'format': '%m/%d/%Y, %I:%M:%S %p', 'locale': None},  # Windows english format wint non-UTC tz (found in systeminfo cli output): 3/22/2021, 1:15:51 PM (UTC-0600)
            {'id': 1705, 'format': '%m/%d/%Y, %I:%M:%S %p %Z', 'locale': None},  # Windows english format with UTC tz (found in systeminfo cli output): 3/22/2021, 1:15:51 PM (UTC)
            {'id': 1710, 'format': '%m/%d/%Y, %I:%M:%S %p UTC%z', 'locale': None},  # Windows english format with UTC tz (found in systeminfo cli output): 3/22/2021, 1:15:51 PM (UTC+0000)
            {'id': 1750, 'format': '%Y/%m/%d-%H:%M:%S.%f', 'locale': None},  # Google Big Table format with no timezone: 1970/01/01-01:00:00.000000
            {'id': 1755, 'format': '%Y/%m/%d-%H:%M:%S.%f%z', 'locale': None},  # Google Big Table format with timezone: 1970/01/01-01:00:00.000000+00:00
            {'id': 1800, 'format': '%d/%b/%Y:%H:%M:%S %z', 'locale': None},  # Common Log Format: 10/Oct/2000:13:55:36 -0700
            {'id': 2000, 'format': '%a %d %b %Y %I:%M:%S %p %Z', 'locale': None},  # en_US.UTF-8 local format (found in upower cli output): Tue 23 Mar 2021 04:12:11 PM UTC
            {'id': 3000, 'format': '%a %d %b %Y %I:%M:%S %p', 'locale': None},  # en_US.UTF-8 local format with non-UTC tz (found in upower cli output): Tue 23 Mar 2021 04:12:11 PM IST
            {'id': 4000, 'format': '%A %d %B %Y %I:%M:%S %p %Z', 'locale': None},  # European-style local format (found in upower cli output): Tuesday 01 October 2019 12:50:41 PM UTC
@ -461,10 +474,12 @@ class timestamp:
        dt_utc: Optional[datetime] = None
        timestamp_naive: Optional[int] = None
        timestamp_utc: Optional[int] = None
        timestamp_obj: Dict[str, Optional[int]] = {
        iso_string: Optional[str] = None
        timestamp_obj: Dict[str, Any] = {
            'format': None,
            'timestamp_naive': None,
            'timestamp_utc': None
            'timestamp_utc': None,
            'iso': None
        }

        # convert format_hint to a tuple so it is hashable (for lru_cache)
@ -484,7 +499,10 @@ class timestamp:
        if 'UTC+' in data or 'UTC-' in data:
            utc_tz = bool('UTC+0000' in data or 'UTC-0000' in data)

        elif '+0000' in data or '-0000' in data:
        elif '+0000' in data \
            or '-0000' in data \
            or '+00:00' in data \
            or '-00:00' in data:
            utc_tz = True

        # normalize the timezone by taking out any timezone reference, except UTC
@ -520,8 +538,9 @@ class timestamp:
            try:
                locale.setlocale(locale.LC_TIME, fmt['locale'])
                dt = datetime.strptime(normalized_datetime, fmt['format'])
                timestamp_naive = int(dt.replace(tzinfo=None).timestamp())
                timestamp_obj['format'] = fmt['id']
                timestamp_naive = int(dt.replace(tzinfo=None).timestamp())
                iso_string = dt.replace(tzinfo=None).isoformat()
                locale.setlocale(locale.LC_TIME, None)
                break
            except Exception:
@ -531,9 +550,11 @@ class timestamp:
        if dt and utc_tz:
            dt_utc = dt.replace(tzinfo=timezone.utc)
            timestamp_utc = int(dt_utc.timestamp())
            iso_string = dt_utc.isoformat()

        if timestamp_naive:
            timestamp_obj['timestamp_naive'] = timestamp_naive
            timestamp_obj['timestamp_utc'] = timestamp_utc
            timestamp_obj['iso'] = iso_string

        return timestamp_obj

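The new `.iso` attribute above follows a simple rule: the ISO string is naive unless the UTC timezone was detected in the input, in which case the UTC-aware `isoformat()` wins. A minimal stdlib-only sketch of that rule (not the `jc.utils.timestamp` class itself, which also handles format detection and caching):

```python
from datetime import datetime, timezone


def to_iso(dt_string: str, fmt: str, utc_tz: bool) -> str:
    """Mimic the timestamp.iso behavior: naive ISO string, unless the
    UTC timezone was detected in the original datetime string."""
    dt = datetime.strptime(dt_string, fmt)
    if utc_tz:
        # UTC detected: emit an aware ISO string with the +00:00 offset
        return dt.replace(tzinfo=timezone.utc).isoformat()
    # otherwise stay naive, matching the naive epoch field
    return dt.replace(tzinfo=None).isoformat()
```

With the openvpn example date, `to_iso('Thu Jun 18 04:23:03 2015', '%a %b %d %H:%M:%S %Y', False)` gives `'2015-06-18T04:23:03'`, and the UTC variant appends `+00:00`.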
29  man/jc.1
@ -1,4 +1,4 @@
.TH jc 1 2022-11-08 1.22.2 "JSON Convert"
.TH jc 1 2022-12-16 1.22.3 "JSON Convert"
.SH NAME
\fBjc\fP \- JSON Convert JSONifies the output of many CLI tools, file-types, and strings
.SH SYNOPSIS
@ -65,6 +65,11 @@ multi-line ASCII and Unicode table parser
\fB--blkid\fP
`blkid` command parser

.TP
.B
\fB--cbt\fP
`cbt` (Google Bigtable) command parser

.TP
.B
\fB--cef\fP
@ -85,6 +90,16 @@ CEF string streaming parser
\fB--cksum\fP
`cksum` and `sum` command parser

.TP
.B
\fB--clf\fP
Common and Combined Log Format file parser

.TP
.B
\fB--clf-s\fP
Common and Combined Log Format file streaming parser

.TP
.B
\fB--crontab\fP
@ -380,6 +395,11 @@ M3U and M3U8 file parser
\fB--ntpq\fP
`ntpq -p` command parser

.TP
.B
\fB--openvpn\fP
openvpn-status.log file parser

.TP
.B
\fB--os-prober\fP
@ -395,6 +415,11 @@ M3U and M3U8 file parser
\fB--pci-ids\fP
`pci.ids` file parser

.TP
.B
\fB--pgpass\fP
PostgreSQL password file parser

.TP
.B
\fB--pidstat\fP
@ -1072,7 +1097,7 @@ JC_COLORS=default,default,default,default
You can set the \fBNO_COLOR\fP environment variable to any value to disable color output in \fBjc\fP. Note that using the \fB-C\fP option to force color output will override both the \fBNO_COLOR\fP environment variable and the \fB-m\fP option.

.SH STREAMING PARSERS
Most parsers load all of the data from \fBSTDIN\fP, parse it, then output the entire JSON document serially. There are some streaming parsers (e.g. \fBls-s\fP, \fBping-s\fP, etc.) that immediately start processing and outputing the data line-by-line as JSON Lines (aka NDJSON) while it is being received from \fBSTDIN\fP. This can significantly reduce the amount of memory required to parse large amounts of command output (e.g. \fBls -lR /\fP) and can sometimes process the data more quickly. Streaming parsers have slightly different behavior than standard parsers as outlined below.
Most parsers load all of the data from \fBSTDIN\fP, parse it, then output the entire JSON document serially. There are some streaming parsers (e.g. \fBls-s\fP, \fBping-s\fP, etc.) that immediately start processing and outputting the data line-by-line as JSON Lines (aka NDJSON) while it is being received from \fBSTDIN\fP. This can significantly reduce the amount of memory required to parse large amounts of command output (e.g. \fBls -lR /\fP) and can sometimes process the data more quickly. Streaming parsers have slightly different behavior than standard parsers as outlined below.

.RS
Note: Streaming parsers cannot be used with the "magic" syntax

@ -1,5 +1,5 @@
#!/usr/bin/env python3
-# Genereate man page from jc metadata using jinja2 templates
+# Generate man page from jc metadata using jinja2 templates
from datetime import date
import jc.cli
from jinja2 import Environment, FileSystemLoader
@ -1,5 +1,5 @@
#!/usr/bin/env python3
-# Genereate README.md from jc metadata using jinja2 templates
+# Generate README.md from jc metadata using jinja2 templates
import jc.cli
import jc.lib
from jinja2 import Environment, FileSystemLoader
setup.py
@ -5,7 +5,7 @@ with open('README.md', 'r') as f:

setuptools.setup(
    name='jc',
-    version='1.22.2',
+    version='1.22.3',
    author='Kelly Brazil',
    author_email='kellyjonbrazil@gmail.com',
    description='Converts the output of popular command-line tools and file-types to JSON.',
@ -182,7 +182,7 @@ JC_COLORS=default,default,default,default
You can set the \fBNO_COLOR\fP environment variable to any value to disable color output in \fBjc\fP. Note that using the \fB-C\fP option to force color output will override both the \fBNO_COLOR\fP environment variable and the \fB-m\fP option.

.SH STREAMING PARSERS
-Most parsers load all of the data from \fBSTDIN\fP, parse it, then output the entire JSON document serially. There are some streaming parsers (e.g. \fBls-s\fP, \fBping-s\fP, etc.) that immediately start processing and outputing the data line-by-line as JSON Lines (aka NDJSON) while it is being received from \fBSTDIN\fP. This can significantly reduce the amount of memory required to parse large amounts of command output (e.g. \fBls -lR /\fP) and can sometimes process the data more quickly. Streaming parsers have slightly different behavior than standard parsers as outlined below.
+Most parsers load all of the data from \fBSTDIN\fP, parse it, then output the entire JSON document serially. There are some streaming parsers (e.g. \fBls-s\fP, \fBping-s\fP, etc.) that immediately start processing and outputting the data line-by-line as JSON Lines (aka NDJSON) while it is being received from \fBSTDIN\fP. This can significantly reduce the amount of memory required to parse large amounts of command output (e.g. \fBls -lR /\fP) and can sometimes process the data more quickly. Streaming parsers have slightly different behavior than standard parsers as outlined below.

.RS
Note: Streaming parsers cannot be used with the "magic" syntax
@ -262,7 +262,7 @@ option.
### Streaming Parsers
Most parsers load all of the data from `STDIN`, parse it, then output the entire
JSON document serially. There are some streaming parsers (e.g. `ls-s` and
-`ping-s`) that immediately start processing and outputing the data line-by-line
+`ping-s`) that immediately start processing and outputting the data line-by-line
as [JSON Lines](https://jsonlines.org/) (aka [NDJSON](http://ndjson.org/)) while
it is being received from `STDIN`. This can significantly reduce the amount of
memory required to parse large amounts of command output (e.g. `ls -lR /`) and
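The streaming behavior described above can be sketched as follows: a minimal, hypothetical consumer of JSON Lines (NDJSON) output such as that produced by `jc --ls-s`. The sample records here are invented stand-ins, not real jc output.

```python
import io
import json

# Stand-in for the stream produced by e.g. `ls -l | jc --ls-s`
# (hypothetical sample data, not real jc output).
ndjson_stream = io.StringIO(
    '{"filename": "a.txt", "size": 10}\n'
    '{"filename": "b.txt", "size": 20}\n'
)

# Each line is an independent JSON object, so it can be processed
# as it arrives without buffering the whole document in memory.
total = 0
for line in ndjson_stream:
    record = json.loads(line)
    total += record["size"]

print(total)  # 30
```

Because state is limited to one record at a time, memory use stays flat no matter how large the input is.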
@ -1 +1 @@
-[{"name":"cxl3","flags":8843,"state":["UP","BROADCAST","RUNNING","SIMPLEX","MULTICAST"],"mtu":1500,"type":null,"mac_addr":"00:07:43:3d:b7:70","ipv4_addr":null,"ipv4_mask":null,"ipv4_bcast":null,"ipv6_addr":null,"ipv6_mask":null,"ipv6_scope":null,"ipv6_type":null,"metric":0,"rx_packets":null,"rx_errors":null,"rx_dropped":null,"rx_overruns":null,"rx_frame":null,"tx_packets":null,"tx_errors":null,"tx_dropped":null,"tx_overruns":null,"tx_carrier":null,"tx_collisions":null,"rx_bytes":null,"tx_bytes":null,"options":"6ec07bb","options_flags":["RXCSUM","TXCSUM","VLAN_MTU","VLAN_HWTAGGING","JUMBO_MTU","VLAN_HWCSUM","TSO4","TSO6","LRO","VLAN_HWTSO","LINKSTATE","RXCSUM_IPV6","TXCSUM_IPV6","HWRXTSTMP","NOMAP"],"hw_address":"00:07:43:3d:b7:88","media":"Ethernet 10Gbase-LR","media_flags":["full-duplex","rxpause","txpause"],"status":"active","nd6_options":29,"nd6_flags":["PERFORMNUD","IFDISABLED","AUTO_LINKLOCAL"],"plugged":"SFP/SFP+/SFP28 10G Base-LR (LC)","vendor":"INNOLIGHT","vendor_pn":"TR-PX13L-N00","vendor_sn":"INJBL0431986","vendor_date":"2020-01-04","module_temperature":"21.20 C","module_voltage":"3.16 Volts"}]
+[{"name":"cxl3","flags":8843,"state":["UP","BROADCAST","RUNNING","SIMPLEX","MULTICAST"],"mtu":1500,"type":null,"mac_addr":"00:07:43:3d:b7:70","ipv4_addr":null,"ipv4_mask":null,"ipv4_bcast":null,"ipv6_addr":null,"ipv6_mask":null,"ipv6_scope":null,"ipv6_type":null,"metric":0,"rx_packets":null,"rx_errors":null,"rx_dropped":null,"rx_overruns":null,"rx_frame":null,"tx_packets":null,"tx_errors":null,"tx_dropped":null,"tx_overruns":null,"tx_carrier":null,"tx_collisions":null,"rx_bytes":null,"tx_bytes":null,"options":"6ec07bb","options_flags":["RXCSUM","TXCSUM","VLAN_MTU","VLAN_HWTAGGING","JUMBO_MTU","VLAN_HWCSUM","TSO4","TSO6","LRO","VLAN_HWTSO","LINKSTATE","RXCSUM_IPV6","TXCSUM_IPV6","HWRXTSTMP","NOMAP"],"hw_address":"00:07:43:3d:b7:88","media":"Ethernet 10Gbase-LR","media_flags":["full-duplex","rxpause","txpause"],"status":"active","nd6_options":29,"nd6_flags":["PERFORMNUD","IFDISABLED","AUTO_LINKLOCAL"],"plugged":"SFP/SFP+/SFP28 10G Base-LR (LC)","vendor":"INNOLIGHT","vendor_pn":"TR-PX13L-N00","vendor_sn":"INJBL0431986","vendor_date":"2020-01-04","module_temperature":"21.20 C","module_voltage":"3.16 Volts","lanes":[{"lane":1,"rx_power_mw":0.49,"rx_power_dbm":-3.1,"tx_bias_ma":23.85}]}]
@ -1 +1 @@
-[{"name":"cxl3","flags":8843,"state":["UP","BROADCAST","RUNNING","SIMPLEX","MULTICAST"],"mtu":1500,"type":null,"mac_addr":"00:07:43:3d:b7:70","ipv4_addr":null,"ipv4_mask":null,"ipv4_bcast":null,"ipv6_addr":null,"ipv6_mask":null,"ipv6_scope":null,"ipv6_type":null,"metric":0,"rx_packets":null,"rx_errors":null,"rx_dropped":null,"rx_overruns":null,"rx_frame":null,"tx_packets":null,"tx_errors":null,"tx_dropped":null,"tx_overruns":null,"tx_carrier":null,"tx_collisions":null,"rx_bytes":null,"tx_bytes":null,"options":"6ec07bb","options_flags":["RXCSUM","TXCSUM","VLAN_MTU","VLAN_HWTAGGING","JUMBO_MTU","VLAN_HWCSUM","TSO4","TSO6","LRO","VLAN_HWTSO","LINKSTATE","RXCSUM_IPV6","TXCSUM_IPV6","HWRXTSTMP","NOMAP"],"hw_address":"00:07:43:3d:b7:88","media":null,"media_flags":null,"status":"active","nd6_options":29,"nd6_flags":["PERFORMNUD","IFDISABLED","AUTO_LINKLOCAL"],"plugged":"SFP/SFP+/SFP28 10G Base-LR (LC)","vendor":"INNOLIGHT","vendor_pn":"TR-PX13L-N00","vendor_sn":"INJBL0431986","vendor_date":"2020-01-04","module_temperature":"21.20 C","module_voltage":"3.16 Volts"}]
+[{"name":"cxl3","flags":8843,"state":["UP","BROADCAST","RUNNING","SIMPLEX","MULTICAST"],"mtu":1500,"type":null,"mac_addr":"00:07:43:3d:b7:70","ipv4_addr":null,"ipv4_mask":null,"ipv4_bcast":null,"ipv6_addr":null,"ipv6_mask":null,"ipv6_scope":null,"ipv6_type":null,"metric":0,"rx_packets":null,"rx_errors":null,"rx_dropped":null,"rx_overruns":null,"rx_frame":null,"tx_packets":null,"tx_errors":null,"tx_dropped":null,"tx_overruns":null,"tx_carrier":null,"tx_collisions":null,"rx_bytes":null,"tx_bytes":null,"options":"6ec07bb","options_flags":["RXCSUM","TXCSUM","VLAN_MTU","VLAN_HWTAGGING","JUMBO_MTU","VLAN_HWCSUM","TSO4","TSO6","LRO","VLAN_HWTSO","LINKSTATE","RXCSUM_IPV6","TXCSUM_IPV6","HWRXTSTMP","NOMAP"],"hw_address":"00:07:43:3d:b7:88","media":null,"media_flags":null,"status":"active","nd6_options":29,"nd6_flags":["PERFORMNUD","IFDISABLED","AUTO_LINKLOCAL"],"plugged":"SFP/SFP+/SFP28 10G Base-LR (LC)","vendor":"INNOLIGHT","vendor_pn":"TR-PX13L-N00","vendor_sn":"INJBL0431986","vendor_date":"2020-01-04","module_temperature":"21.20 C","module_voltage":"3.16 Volts","lanes":[{"lane":1,"rx_power_mw":0.49,"rx_power_dbm":-3.1,"tx_bias_ma":23.85}]}]
tests/fixtures/freebsd12/ifconfig-extra-fields4.json (vendored, new file)
@ -0,0 +1 @@
+[{"name":"mce0","flags":8863,"state":["UP","BROADCAST","RUNNING","SIMPLEX","MULTICAST"],"mtu":1500,"type":null,"mac_addr":"0c:42:a1:2b:03:74","ipv4_addr":"23.246.26.130","ipv4_mask":"255.255.255.128","ipv4_bcast":"23.246.26.255","ipv6_addr":"2a00:86c0:1026:1026::130","ipv6_mask":64,"ipv6_scope":null,"ipv6_type":null,"metric":0,"rx_packets":null,"rx_errors":null,"rx_dropped":null,"rx_overruns":null,"rx_frame":null,"tx_packets":null,"tx_errors":null,"tx_dropped":null,"tx_overruns":null,"tx_carrier":null,"tx_collisions":null,"rx_bytes":null,"tx_bytes":null,"ipv6_scope_id":null,"media":"Ethernet 100GBase-SR4","media_flags":["full-duplex","rxpause","txpause"],"status":"active","nd6_options":21,"nd6_flags":["PERFORMNUD","AUTO_LINKLOCAL"],"plugged":"QSFP28 Unknown (MPO 1x12 Parallel Optic)","vendor":"INNOLIGHT","vendor_pn":"TR-FC85S-N00","vendor_sn":"INKAT2930248","vendor_date":"2020-05-20","module_temperature":"49.41 C","module_voltage":"3.29 Volts","ipv4":[{"address":"23.246.26.130","mask":"255.255.255.128","broadcast":"23.246.26.255"}],"ipv6":[{"address":"fe80::e42:a1ff:fe2b:374","scope_id":"mce0","mask":64,"scope":"0x4"},{"address":"2a00:86c0:1026:1026::130","scope_id":null,"mask":64,"scope":null}],"lanes":[{"lane":1,"rx_power_mw":0.98,"rx_power_dbm":-0.11,"tx_bias_ma":7.6},{"lane":2,"rx_power_mw":0.98,"rx_power_dbm":-0.07,"tx_bias_ma":7.6},{"lane":3,"rx_power_mw":1.01,"rx_power_dbm":0.03,"tx_bias_ma":7.6},{"lane":4,"rx_power_mw":0.92,"rx_power_dbm":-0.34,"tx_bias_ma":7.6}]}]
tests/fixtures/freebsd12/ifconfig-extra-fields4.out (vendored, new file)
@ -0,0 +1,16 @@
+mce0: flags=8863<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
+options RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWFILTER,VLAN_HWTSO,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6,HWSTATS,TXRTLMT,HWRXTSTMP,MEXTPG,TXTLS4,TXTLS6,VXLAN_HWCSUM,VXLAN_HWTSO,TXTLS_RTLMT,RXTLS4,RXTLS6
+ether 0c:42:a1:2b:03:74
+inet 23.246.26.130 netmask 0xffffff80 broadcast 23.246.26.255
+inet6 fe80::e42:a1ff:fe2b:374%mce0 prefixlen 64 scopeid 0x4
+inet6 2a00:86c0:1026:1026::130 prefixlen 64
+media: Ethernet 100GBase-SR4 <full-duplex,rxpause,txpause>
+status: active
+nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
+plugged: QSFP28 Unknown (MPO 1x12 Parallel Optic)
+vendor: INNOLIGHT PN: TR-FC85S-N00 SN: INKAT2930248 DATE: 2020-05-20
+module temperature: 49.41 C voltage: 3.29 Volts
+lane 1: RX power: 0.98 mW (-0.11 dBm) TX bias: 7.60 mA
+lane 2: RX power: 0.98 mW (-0.07 dBm) TX bias: 7.60 mA
+lane 3: RX power: 1.01 mW (0.03 dBm) TX bias: 7.60 mA
+lane 4: RX power: 0.92 mW (-0.34 dBm) TX bias: 7.60 mA
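The `lane N:` lines in the fixture above correspond to the `lanes` objects in the matching JSON fixture. A hedged sketch of that extraction (an illustration only, not jc's actual parser code):

```python
import re

# Illustrative pattern for BSD ifconfig SFP lane lines like:
#   lane 1: RX power: 0.98 mW (-0.11 dBm) TX bias: 7.60 mA
lane_re = re.compile(
    r'lane (?P<lane>\d+): RX power: (?P<rx_power_mw>[\d.]+) mW '
    r'\((?P<rx_power_dbm>-?[\d.]+) dBm\) TX bias: (?P<tx_bias_ma>[\d.]+) mA'
)

line = 'lane 1: RX power: 0.98 mW (-0.11 dBm) TX bias: 7.60 mA'
m = lane_re.search(line)
lane = {
    'lane': int(m.group('lane')),
    'rx_power_mw': float(m.group('rx_power_mw')),
    'rx_power_dbm': float(m.group('rx_power_dbm')),
    'tx_bias_ma': float(m.group('tx_bias_ma')),
}
print(lane)  # {'lane': 1, 'rx_power_mw': 0.98, 'rx_power_dbm': -0.11, 'tx_bias_ma': 7.6}
```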
tests/fixtures/generic/cbt-multiple-columns.json (vendored, new file)
@ -0,0 +1 @@
+[{"key":"foo","cells":{"bat":{"bar":"baz"},"foo":{"bar1":"baz1","bar2":"baz2"}}}]
tests/fixtures/generic/cbt-multiple-columns.out (vendored, new file)
@ -0,0 +1,8 @@
+----------------------------------------
+foo
+foo:bar1 @ 1970/01/01-01:00:00.000000
+"baz1"
+foo:bar2 @ 1970/01/01-01:00:00.000000
+"baz2"
+bat:bar @ 1970/01/01-01:00:00.000000
+"baz"
tests/fixtures/generic/cbt-multiple-rows-raw.json (vendored, new file)
@ -0,0 +1 @@
+[{"key":"foo","cells":[{"column_family":"foo","column":"bar","value":"baz1","timestamp_iso":"2000-01-01T01:00:00","timestamp_epoch":946717200,"timestamp_epoch_utc":null}]},{"key":"bar","cells":[{"column_family":"foo","column":"bar","value":"baz2","timestamp_iso":"2000-01-01T01:00:00","timestamp_epoch":946717200,"timestamp_epoch_utc":null}]}]
tests/fixtures/generic/cbt-multiple-rows.json (vendored, new file)
@ -0,0 +1 @@
+[{"key":"foo","cells":{"foo":{"bar":"baz1"}}},{"key":"bar","cells":{"foo":{"bar":"baz2"}}}]
tests/fixtures/generic/cbt-multiple-rows.out (vendored, new file)
@ -0,0 +1,8 @@
+----------------------------------------
+foo
+foo:bar @ 2000/01/01-01:00:00.000000
+"baz1"
+----------------------------------------
+bar
+foo:bar @ 2000/01/01-01:00:00.000000
+"baz2"
tests/fixtures/generic/cbt-single.json (vendored, new file)
@ -0,0 +1 @@
+[{"key":"foo","cells":{"foo":{"bar":"baz"}}}]
tests/fixtures/generic/cbt-single.out (vendored, new file)
@ -0,0 +1,4 @@
+----------------------------------------
+foo
+foo:bar @ 1970/01/01-01:00:00.000000
+"baz"
tests/fixtures/generic/common-log-format-streaming.json (vendored, new file)
File diff suppressed because one or more lines are too long
tests/fixtures/generic/common-log-format.json (vendored, new file)
File diff suppressed because one or more lines are too long
tests/fixtures/generic/common-log-format.log (vendored, new file)
@ -0,0 +1,21 @@
+127.0.0.1 user-identifier frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTPS/1.0" 200 2326
+1.1.1.2 - - [11/Nov/2016:03:04:55 +0100] "GET /" 200 83 "-" "-" - 9221 1.1.1.1
+127.0.0.1 - - [11/Nov/2016:14:24:21 +0100] "GET /uno dos" 404 298 "-" "-" - 400233 1.1.1.1
+127.0.0.1 - - [11/Nov/2016:14:23:37 +0100] "GET /uno dos HTTP/1.0" 404 298 "-" "-" - 385111 1.1.1.1
+1.1.1.1 - - [11/Nov/2016:00:00:11 +0100] "GET /icc HTTP/1.1" 302 - "-" "XXX XXX XXX" - 6160 11.1.1.1
+1.1.1.1 - - [11/Nov/2016:00:00:11 +0100] "GET /icc/ HTTP/1.1" 302 - "-" "XXX XXX XXX" - 2981 1.1.1.1
+unparsable line
+tarpon.gulf.net - - [12/Jan/1996:20:37:55 +0000] "GET index.htm HTTP/1.0" 200 215
+tarpon.gulf.net - - [12/Jan/1996:20:37:56 +0000] "POST products.htm HTTP/1.0" 200 215
+tarpon.gulf.net - - [12/Jan/1996:20:37:57 +0000] "PUT sales.htm HTTP/1.0" 200 215
+tarpon.gulf.net - - [12/Jan/1996:20:37:58 +0000] "GET /images/log.gif HTTP/1.0" 200 215
+tarpon.gulf.net - - [12/Jan/1996:20:37:59 +0000] "GET /buttons/form.gif HTTP/1.0" 200 215
+66.249.66.1 - - [01/Jan/2017:09:00:00 +0000] "GET /contact.html HTTP/1.1" 200 250
+
+another unparsable line
+
+66.249.66.1 - - [01/Jan/2017:09:00:00 +0000] "GET /contact.html HTTP/1.1" 200 250 "http://www.example.com/" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
+127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326 "http://www.example.com/start.html" "Mozilla/4.08 [en] (Win98; I ;Nav)"
+jay.bird.com - fred [25/Dec/1998:17:45:35 +0000] "GET /~sret1/ HTTP/1.0" 200 1243
+127.0.0.1 - peter [9/Feb/2017:10:34:12 -0700] "GET /sample-image.png HTTP/2" 200 1479
+10.1.2.3 - rehg [10/Nov/2021:19:22:12 -0000] "GET /sematext.png HTTP/1.1" 200 3423
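The fixture lines above follow the Common Log Format's field layout: host, ident, user, bracketed date, quoted request, status, and bytes. A minimal sketch of splitting one such line with a regular expression (an illustration of the layout only, not jc's implementation):

```python
import re

# Hypothetical pattern covering the seven core CLF fields.
clf_pattern = re.compile(
    r'(?P<host>\S+) (?P<ident>\S+) (?P<user>\S+) '
    r'\[(?P<date>[^\]]+)\] "(?P<request>[^"]*)" '
    r'(?P<status>\d{3}) (?P<size>\d+|-)'
)

# First line of the fixture above.
line = ('127.0.0.1 user-identifier frank [10/Oct/2000:13:55:36 -0700] '
        '"GET /apache_pb.gif HTTPS/1.0" 200 2326')
fields = clf_pattern.match(line).groupdict()
print(fields['host'], fields['status'], fields['size'])  # 127.0.0.1 200 2326
```

Note the fixture deliberately mixes in unparsable lines, blank lines, and Combined Log Format lines (with referer and user-agent fields) to exercise the parser's error handling.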
tests/fixtures/generic/git-log-blank-author-fix-streaming.json (vendored, new file)
@ -0,0 +1 @@
+[{"commit":"096fffdb79807d34b99985b38df0a3df7f6a86c7","author":null,"author_email":"foo@example.com","date":"Wed Apr 20 10:03:36 2022 -0400","message":"commit by an author with a blank name","epoch":1650474216,"epoch_utc":null},{"commit":"728d882ed007b3c8b785018874a0eb06e1143b66","author":null,"author_email":null,"date":"Wed Apr 20 09:50:19 2022 -0400","message":"this author has a blank name and an empty email","epoch":1650473419,"epoch_utc":null},{"commit":"b53e42aca623181aa9bc72194e6eeef1e9a3a237","author":"Bob Committer","author_email":null,"date":"Wed Apr 20 09:44:42 2022 -0400","message":"this author has a name, but no email","epoch":1650473082,"epoch_utc":null}]
tests/fixtures/generic/git-log-blank-author-fix.json (vendored, new file)
@ -0,0 +1 @@
+[{"commit":"096fffdb79807d34b99985b38df0a3df7f6a86c7","author":null,"author_email":"foo@example.com","date":"Wed Apr 20 10:03:36 2022 -0400","message":"commit by an author with a blank name","epoch":1650474216,"epoch_utc":null},{"commit":"728d882ed007b3c8b785018874a0eb06e1143b66","author":null,"author_email":null,"date":"Wed Apr 20 09:50:19 2022 -0400","message":"this author has a blank name and an empty email","epoch":1650473419,"epoch_utc":null},{"commit":"b53e42aca623181aa9bc72194e6eeef1e9a3a237","author":"Bob Committer","author_email":null,"date":"Wed Apr 20 09:44:42 2022 -0400","message":"this author has a name, but no email","epoch":1650473082,"epoch_utc":null}]
tests/fixtures/generic/git-log-blank-author-fix.out (vendored, new file)
@ -0,0 +1,17 @@
+commit 096fffdb79807d34b99985b38df0a3df7f6a86c7
+Author: <foo@example.com>
+Date: Wed Apr 20 10:03:36 2022 -0400
+
+commit by an author with a blank name
+
+commit 728d882ed007b3c8b785018874a0eb06e1143b66
+Author: <>
+Date: Wed Apr 20 09:50:19 2022 -0400
+
+this author has a blank name and an empty email
+
+commit b53e42aca623181aa9bc72194e6eeef1e9a3a237
+Author: Bob Committer <>
+Date: Wed Apr 20 09:44:42 2022 -0400
+
+this author has a name, but no email
tests/fixtures/generic/git-log-full.json (vendored)
File diff suppressed because one or more lines are too long
Some files were not shown because too many files have changed in this diff.