mirror of https://github.com/kellyjonbrazil/jc.git synced 2025-07-13 01:20:24 +02:00

Merge pull request #499 from kellyjonbrazil/dev

v1.24.0

Kelly Brazil authored 2023-12-17 18:08:02 +00:00 (committed via GitHub)
97 changed files with 34,000 additions and 984 deletions


@@ -14,12 +14,12 @@ jobs:
     strategy:
       matrix:
         os: [macos-latest, ubuntu-20.04, windows-latest]
-        python-version: ["3.6", "3.7", "3.8", "3.9", "3.10", "3.11"]
+        python-version: ["3.6", "3.7", "3.8", "3.9", "3.10", "3.11", "3.12"]
     steps:
       - uses: actions/checkout@v3
       - name: "Set up timezone to America/Los_Angeles"
-        uses: szenius/set-timezone@v1.0
+        uses: szenius/set-timezone@v1.2
         with:
           timezoneLinux: "America/Los_Angeles"
           timezoneMacos: "America/Los_Angeles"

.gitignore vendored

@@ -6,3 +6,4 @@ build/
 .github/
 .vscode/
 _config.yml
+.venv


@ -1,5 +1,23 @@
jc changelog jc changelog
20231216 v1.24.0
- Add `debconf-show` command parser
- Add `iftop` command parser
- Add `pkg-index-apk` parser for Alpine Linux Package Index files
- Add `pkg-index-deb` parser for Debian/Ubuntu Package Index files
- Add `proc-cmdline` parser for `/proc/cmdline` file
- Add `swapon` command parser
- Add `tune2fs` command parser
- Remove `iso-datetime` parser deprecated since v1.22.1. (use `datetime-iso` instead)
- Update timezone change in Github Actions for node v16 requirement
- Add Python 3.12 tests to Github Actions
- Refactor `acpi` command parser for code cleanup
- Refactor vendored libraries to remove Python 2 support
- Fix `iptables` parser for cases where the `target` field is blank in a rule
- Fix `vmstat` parsers for some cases where wide output is used
- Fix `mount` parser for cases with spaces in the mount point name
- Fix `xrandr` parser for infinite loop issues
20231023 v1.23.6 20231023 v1.23.6
- Fix XML parser for xmltodict library versions < 0.13.0 - Fix XML parser for xmltodict library versions < 0.13.0
- Fix `who` command parser for cases when the from field contains spaces - Fix `who` command parser for cases when the from field contains spaces


@@ -120,6 +120,7 @@ pip3 install jc
 | NixOS linux | `nix-env -iA nixpkgs.jc` or `nix-env -iA nixos.jc` |
 | Guix System linux | `guix install jc` |
 | Gentoo Linux | `emerge dev-python/jc` |
+| Photon linux | `tdnf install jc` |
 | macOS | `brew install jc` |
 | FreeBSD | `portsnap fetch update && cd /usr/ports/textproc/py-jc && make install clean` |
 | Ansible filter plugin | `ansible-galaxy collection install community.general` |
@@ -178,6 +179,7 @@ option.
 | `--csv-s` | CSV file streaming parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/csv_s) |
 | `--date` | `date` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/date) |
 | `--datetime-iso` | ISO 8601 Datetime string parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/datetime_iso) |
+| `--debconf-show` | `debconf-show` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/debconf_show) |
 | `--df` | `df` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/df) |
 | `--dig` | `dig` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/dig) |
 | `--dir` | `dir` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/dir) |
@@ -250,6 +252,8 @@ option.
 | `--ping-s` | `ping` and `ping6` command streaming parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/ping_s) |
 | `--pip-list` | `pip list` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/pip_list) |
 | `--pip-show` | `pip show` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/pip_show) |
+| `--pkg-index-apk` | Alpine Linux Package Index file parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/pkg_index_apk) |
+| `--pkg-index-deb` | Debian Package Index file parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/pkg_index_deb) |
 | `--plist` | PLIST file parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/plist) |
 | `--postconf` | `postconf -M` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/postconf) |
 | `--proc` | `/proc/` file parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/proc) |
@@ -268,6 +272,7 @@ option.
 | `--sshd-conf` | `sshd` config file and `sshd -T` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/sshd_conf) |
 | `--stat` | `stat` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/stat) |
 | `--stat-s` | `stat` command streaming parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/stat_s) |
+| `--swapon` | `swapon` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/swapon) |
 | `--sysctl` | `sysctl` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/sysctl) |
 | `--syslog` | Syslog RFC 5424 string parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/syslog) |
 | `--syslog-s` | Syslog RFC 5424 string streaming parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/syslog_s) |
@@ -286,6 +291,7 @@ option.
 | `--top-s` | `top -b` command streaming parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/top_s) |
 | `--tracepath` | `tracepath` and `tracepath6` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/tracepath) |
 | `--traceroute` | `traceroute` and `traceroute6` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/traceroute) |
+| `--tune2fs` | `tune2fs -l` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/tune2fs) |
 | `--udevadm` | `udevadm info` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/udevadm) |
 | `--ufw` | `ufw status` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/ufw) |
 | `--ufw-appinfo` | `ufw app info [application]` command parser | [details](https://kellyjonbrazil.github.io/jc/docs/parsers/ufw_appinfo) |


@@ -3,8 +3,8 @@ _jc()
     local cur prev words cword jc_commands jc_parsers jc_options \
         jc_about_options jc_about_mod_options jc_help_options jc_special_options
-    jc_commands=(acpi airport arp blkid bluetoothctl cbt certbot chage cksum crontab date df dig dmidecode dpkg du env file findmnt finger free git gpg hciconfig host id ifconfig iostat ip iptables iw iwconfig jobs last lastb ls lsattr lsb_release lsblk lsmod lsof lspci lsusb md5 md5sum mdadm mount mpstat netstat nmcli nsd-control ntpq os-prober pidstat ping ping6 pip pip3 postconf printenv ps route rpm rsync sfdisk sha1sum sha224sum sha256sum sha384sum sha512sum shasum ss ssh sshd stat sum sysctl systemctl systeminfo timedatectl top tracepath tracepath6 traceroute traceroute6 udevadm ufw uname update-alternatives upower uptime vdir veracrypt vmstat w wc who xrandr zipinfo zpool)
-    jc_parsers=(--acpi --airport --airport-s --arp --asciitable --asciitable-m --blkid --bluetoothctl --cbt --cef --cef-s --certbot --chage --cksum --clf --clf-s --crontab --crontab-u --csv --csv-s --date --datetime-iso --df --dig --dir --dmidecode --dpkg-l --du --email-address --env --file --find --findmnt --finger --free --fstab --git-log --git-log-s --git-ls-remote --gpg --group --gshadow --hash --hashsum --hciconfig --history --host --hosts --id --ifconfig --ini --ini-dup --iostat --iostat-s --ip-address --iptables --ip-route --iw-scan --iwconfig --jar-manifest --jobs --jwt --kv --last --ls --ls-s --lsattr --lsb-release --lsblk --lsmod --lsof --lspci --lsusb --m3u --mdadm --mount --mpstat --mpstat-s --netstat --nmcli --nsd-control --ntpq --openvpn --os-prober --os-release --passwd --pci-ids --pgpass --pidstat --pidstat-s --ping --ping-s --pip-list --pip-show --plist --postconf --proc --proc-buddyinfo --proc-consoles --proc-cpuinfo --proc-crypto --proc-devices --proc-diskstats --proc-filesystems --proc-interrupts --proc-iomem --proc-ioports --proc-loadavg --proc-locks --proc-meminfo --proc-modules --proc-mtrr --proc-pagetypeinfo --proc-partitions --proc-slabinfo --proc-softirqs --proc-stat --proc-swaps --proc-uptime --proc-version --proc-vmallocinfo --proc-vmstat --proc-zoneinfo --proc-driver-rtc --proc-net-arp --proc-net-dev --proc-net-dev-mcast --proc-net-if-inet6 --proc-net-igmp --proc-net-igmp6 --proc-net-ipv6-route --proc-net-netlink --proc-net-netstat --proc-net-packet --proc-net-protocols --proc-net-route --proc-net-tcp --proc-net-unix --proc-pid-fdinfo --proc-pid-io --proc-pid-maps --proc-pid-mountinfo --proc-pid-numa-maps --proc-pid-smaps --proc-pid-stat --proc-pid-statm --proc-pid-status --ps --resolve-conf --route --rpm-qi --rsync --rsync-s --semver --sfdisk --shadow --srt --ss --ssh-conf --sshd-conf --stat --stat-s --sysctl --syslog --syslog-s --syslog-bsd --syslog-bsd-s --systemctl --systemctl-lj --systemctl-ls --systemctl-luf --systeminfo --time --timedatectl --timestamp --toml --top --top-s --tracepath --traceroute --udevadm --ufw --ufw-appinfo --uname --update-alt-gs --update-alt-q --upower --uptime --url --ver --veracrypt --vmstat --vmstat-s --w --wc --who --x509-cert --x509-csr --xml --xrandr --yaml --zipinfo --zpool-iostat --zpool-status)
+    jc_commands=(acpi airport arp blkid bluetoothctl cbt certbot chage cksum crontab date debconf-show df dig dmidecode dpkg du env file findmnt finger free git gpg hciconfig host id ifconfig iostat ip iptables iw iwconfig jobs last lastb ls lsattr lsb_release lsblk lsmod lsof lspci lsusb md5 md5sum mdadm mount mpstat netstat nmcli nsd-control ntpq os-prober pidstat ping ping6 pip pip3 postconf printenv ps route rpm rsync sfdisk sha1sum sha224sum sha256sum sha384sum sha512sum shasum ss ssh sshd stat sum swapon sysctl systemctl systeminfo timedatectl top tracepath tracepath6 traceroute traceroute6 tune2fs udevadm ufw uname update-alternatives upower uptime vdir veracrypt vmstat w wc who xrandr zipinfo zpool)
+    jc_parsers=(--acpi --airport --airport-s --arp --asciitable --asciitable-m --blkid --bluetoothctl --cbt --cef --cef-s --certbot --chage --cksum --clf --clf-s --crontab --crontab-u --csv --csv-s --date --datetime-iso --debconf-show --df --dig --dir --dmidecode --dpkg-l --du --email-address --env --file --find --findmnt --finger --free --fstab --git-log --git-log-s --git-ls-remote --gpg --group --gshadow --hash --hashsum --hciconfig --history --host --hosts --id --ifconfig --ini --ini-dup --iostat --iostat-s --ip-address --iptables --ip-route --iw-scan --iwconfig --jar-manifest --jobs --jwt --kv --last --ls --ls-s --lsattr --lsb-release --lsblk --lsmod --lsof --lspci --lsusb --m3u --mdadm --mount --mpstat --mpstat-s --netstat --nmcli --nsd-control --ntpq --openvpn --os-prober --os-release --passwd --pci-ids --pgpass --pidstat --pidstat-s --ping --ping-s --pip-list --pip-show --pkg-index-apk --pkg-index-deb --plist --postconf --proc --proc-buddyinfo --proc-cmdline --proc-consoles --proc-cpuinfo --proc-crypto --proc-devices --proc-diskstats --proc-filesystems --proc-interrupts --proc-iomem --proc-ioports --proc-loadavg --proc-locks --proc-meminfo --proc-modules --proc-mtrr --proc-pagetypeinfo --proc-partitions --proc-slabinfo --proc-softirqs --proc-stat --proc-swaps --proc-uptime --proc-version --proc-vmallocinfo --proc-vmstat --proc-zoneinfo --proc-driver-rtc --proc-net-arp --proc-net-dev --proc-net-dev-mcast --proc-net-if-inet6 --proc-net-igmp --proc-net-igmp6 --proc-net-ipv6-route --proc-net-netlink --proc-net-netstat --proc-net-packet --proc-net-protocols --proc-net-route --proc-net-tcp --proc-net-unix --proc-pid-fdinfo --proc-pid-io --proc-pid-maps --proc-pid-mountinfo --proc-pid-numa-maps --proc-pid-smaps --proc-pid-stat --proc-pid-statm --proc-pid-status --ps --resolve-conf --route --rpm-qi --rsync --rsync-s --semver --sfdisk --shadow --srt --ss --ssh-conf --sshd-conf --stat --stat-s --swapon --sysctl --syslog --syslog-s --syslog-bsd --syslog-bsd-s --systemctl --systemctl-lj --systemctl-ls --systemctl-luf --systeminfo --time --timedatectl --timestamp --toml --top --top-s --tracepath --traceroute --tune2fs --udevadm --ufw --ufw-appinfo --uname --update-alt-gs --update-alt-q --upower --uptime --url --ver --veracrypt --vmstat --vmstat-s --w --wc --who --x509-cert --x509-csr --xml --xrandr --yaml --zipinfo --zpool-iostat --zpool-status)
     jc_options=(--force-color -C --debug -d --monochrome -m --meta-out -M --pretty -p --quiet -q --raw -r --unbuffer -u --yaml-out -y)
     jc_about_options=(--about -a)
     jc_about_mod_options=(--pretty -p --yaml-out -y --monochrome -m --force-color -C)


@@ -9,7 +9,7 @@ _jc() {
         jc_help_options jc_help_options_describe \
         jc_special_options jc_special_options_describe
-    jc_commands=(acpi airport arp blkid bluetoothctl cbt certbot chage cksum crontab date df dig dmidecode dpkg du env file findmnt finger free git gpg hciconfig host id ifconfig iostat ip iptables iw iwconfig jobs last lastb ls lsattr lsb_release lsblk lsmod lsof lspci lsusb md5 md5sum mdadm mount mpstat netstat nmcli nsd-control ntpq os-prober pidstat ping ping6 pip pip3 postconf printenv ps route rpm rsync sfdisk sha1sum sha224sum sha256sum sha384sum sha512sum shasum ss ssh sshd stat sum sysctl systemctl systeminfo timedatectl top tracepath tracepath6 traceroute traceroute6 udevadm ufw uname update-alternatives upower uptime vdir veracrypt vmstat w wc who xrandr zipinfo zpool)
+    jc_commands=(acpi airport arp blkid bluetoothctl cbt certbot chage cksum crontab date debconf-show df dig dmidecode dpkg du env file findmnt finger free git gpg hciconfig host id ifconfig iostat ip iptables iw iwconfig jobs last lastb ls lsattr lsb_release lsblk lsmod lsof lspci lsusb md5 md5sum mdadm mount mpstat netstat nmcli nsd-control ntpq os-prober pidstat ping ping6 pip pip3 postconf printenv ps route rpm rsync sfdisk sha1sum sha224sum sha256sum sha384sum sha512sum shasum ss ssh sshd stat sum swapon sysctl systemctl systeminfo timedatectl top tracepath tracepath6 traceroute traceroute6 tune2fs udevadm ufw uname update-alternatives upower uptime vdir veracrypt vmstat w wc who xrandr zipinfo zpool)
     jc_commands_describe=(
         'acpi:run "acpi" command with magic syntax.'
         'airport:run "airport" command with magic syntax.'
@@ -22,6 +22,7 @@ _jc() {
         'cksum:run "cksum" command with magic syntax.'
         'crontab:run "crontab" command with magic syntax.'
         'date:run "date" command with magic syntax.'
+        'debconf-show:run "debconf-show" command with magic syntax.'
         'df:run "df" command with magic syntax.'
         'dig:run "dig" command with magic syntax.'
         'dmidecode:run "dmidecode" command with magic syntax.'
@@ -87,6 +88,7 @@ _jc() {
         'sshd:run "sshd" command with magic syntax.'
         'stat:run "stat" command with magic syntax.'
         'sum:run "sum" command with magic syntax.'
+        'swapon:run "swapon" command with magic syntax.'
         'sysctl:run "sysctl" command with magic syntax.'
         'systemctl:run "systemctl" command with magic syntax.'
         'systeminfo:run "systeminfo" command with magic syntax.'
@@ -96,6 +98,7 @@ _jc() {
         'tracepath6:run "tracepath6" command with magic syntax.'
         'traceroute:run "traceroute" command with magic syntax.'
         'traceroute6:run "traceroute6" command with magic syntax.'
+        'tune2fs:run "tune2fs" command with magic syntax.'
         'udevadm:run "udevadm" command with magic syntax.'
         'ufw:run "ufw" command with magic syntax.'
         'uname:run "uname" command with magic syntax.'
@@ -112,7 +115,7 @@ _jc() {
         'zipinfo:run "zipinfo" command with magic syntax.'
         'zpool:run "zpool" command with magic syntax.'
     )
-    jc_parsers=(--acpi --airport --airport-s --arp --asciitable --asciitable-m --blkid --bluetoothctl --cbt --cef --cef-s --certbot --chage --cksum --clf --clf-s --crontab --crontab-u --csv --csv-s --date --datetime-iso --df --dig --dir --dmidecode --dpkg-l --du --email-address --env --file --find --findmnt --finger --free --fstab --git-log --git-log-s --git-ls-remote --gpg --group --gshadow --hash --hashsum --hciconfig --history --host --hosts --id --ifconfig --ini --ini-dup --iostat --iostat-s --ip-address --iptables --ip-route --iw-scan --iwconfig --jar-manifest --jobs --jwt --kv --last --ls --ls-s --lsattr --lsb-release --lsblk --lsmod --lsof --lspci --lsusb --m3u --mdadm --mount --mpstat --mpstat-s --netstat --nmcli --nsd-control --ntpq --openvpn --os-prober --os-release --passwd --pci-ids --pgpass --pidstat --pidstat-s --ping --ping-s --pip-list --pip-show --plist --postconf --proc --proc-buddyinfo --proc-consoles --proc-cpuinfo --proc-crypto --proc-devices --proc-diskstats --proc-filesystems --proc-interrupts --proc-iomem --proc-ioports --proc-loadavg --proc-locks --proc-meminfo --proc-modules --proc-mtrr --proc-pagetypeinfo --proc-partitions --proc-slabinfo --proc-softirqs --proc-stat --proc-swaps --proc-uptime --proc-version --proc-vmallocinfo --proc-vmstat --proc-zoneinfo --proc-driver-rtc --proc-net-arp --proc-net-dev --proc-net-dev-mcast --proc-net-if-inet6 --proc-net-igmp --proc-net-igmp6 --proc-net-ipv6-route --proc-net-netlink --proc-net-netstat --proc-net-packet --proc-net-protocols --proc-net-route --proc-net-tcp --proc-net-unix --proc-pid-fdinfo --proc-pid-io --proc-pid-maps --proc-pid-mountinfo --proc-pid-numa-maps --proc-pid-smaps --proc-pid-stat --proc-pid-statm --proc-pid-status --ps --resolve-conf --route --rpm-qi --rsync --rsync-s --semver --sfdisk --shadow --srt --ss --ssh-conf --sshd-conf --stat --stat-s --sysctl --syslog --syslog-s --syslog-bsd --syslog-bsd-s --systemctl --systemctl-lj --systemctl-ls --systemctl-luf --systeminfo --time --timedatectl --timestamp --toml --top --top-s --tracepath --traceroute --udevadm --ufw --ufw-appinfo --uname --update-alt-gs --update-alt-q --upower --uptime --url --ver --veracrypt --vmstat --vmstat-s --w --wc --who --x509-cert --x509-csr --xml --xrandr --yaml --zipinfo --zpool-iostat --zpool-status)
+    jc_parsers=(--acpi --airport --airport-s --arp --asciitable --asciitable-m --blkid --bluetoothctl --cbt --cef --cef-s --certbot --chage --cksum --clf --clf-s --crontab --crontab-u --csv --csv-s --date --datetime-iso --debconf-show --df --dig --dir --dmidecode --dpkg-l --du --email-address --env --file --find --findmnt --finger --free --fstab --git-log --git-log-s --git-ls-remote --gpg --group --gshadow --hash --hashsum --hciconfig --history --host --hosts --id --ifconfig --ini --ini-dup --iostat --iostat-s --ip-address --iptables --ip-route --iw-scan --iwconfig --jar-manifest --jobs --jwt --kv --last --ls --ls-s --lsattr --lsb-release --lsblk --lsmod --lsof --lspci --lsusb --m3u --mdadm --mount --mpstat --mpstat-s --netstat --nmcli --nsd-control --ntpq --openvpn --os-prober --os-release --passwd --pci-ids --pgpass --pidstat --pidstat-s --ping --ping-s --pip-list --pip-show --pkg-index-apk --pkg-index-deb --plist --postconf --proc --proc-buddyinfo --proc-cmdline --proc-consoles --proc-cpuinfo --proc-crypto --proc-devices --proc-diskstats --proc-filesystems --proc-interrupts --proc-iomem --proc-ioports --proc-loadavg --proc-locks --proc-meminfo --proc-modules --proc-mtrr --proc-pagetypeinfo --proc-partitions --proc-slabinfo --proc-softirqs --proc-stat --proc-swaps --proc-uptime --proc-version --proc-vmallocinfo --proc-vmstat --proc-zoneinfo --proc-driver-rtc --proc-net-arp --proc-net-dev --proc-net-dev-mcast --proc-net-if-inet6 --proc-net-igmp --proc-net-igmp6 --proc-net-ipv6-route --proc-net-netlink --proc-net-netstat --proc-net-packet --proc-net-protocols --proc-net-route --proc-net-tcp --proc-net-unix --proc-pid-fdinfo --proc-pid-io --proc-pid-maps --proc-pid-mountinfo --proc-pid-numa-maps --proc-pid-smaps --proc-pid-stat --proc-pid-statm --proc-pid-status --ps --resolve-conf --route --rpm-qi --rsync --rsync-s --semver --sfdisk --shadow --srt --ss --ssh-conf --sshd-conf --stat --stat-s --swapon --sysctl --syslog --syslog-s --syslog-bsd --syslog-bsd-s --systemctl --systemctl-lj --systemctl-ls --systemctl-luf --systeminfo --time --timedatectl --timestamp --toml --top --top-s --tracepath --traceroute --tune2fs --udevadm --ufw --ufw-appinfo --uname --update-alt-gs --update-alt-q --upower --uptime --url --ver --veracrypt --vmstat --vmstat-s --w --wc --who --x509-cert --x509-csr --xml --xrandr --yaml --zipinfo --zpool-iostat --zpool-status)
     jc_parsers_describe=(
         '--acpi:`acpi` command parser'
         '--airport:`airport -I` command parser'
@@ -136,6 +139,7 @@ _jc() {
         '--csv-s:CSV file streaming parser'
         '--date:`date` command parser'
         '--datetime-iso:ISO 8601 Datetime string parser'
+        '--debconf-show:`debconf-show` command parser'
         '--df:`df` command parser'
         '--dig:`dig` command parser'
         '--dir:`dir` command parser'
@@ -208,10 +212,13 @@ _jc() {
         '--ping-s:`ping` and `ping6` command streaming parser'
         '--pip-list:`pip list` command parser'
         '--pip-show:`pip show` command parser'
+        '--pkg-index-apk:Alpine Linux Package Index file parser'
+        '--pkg-index-deb:Debian Package Index file parser'
         '--plist:PLIST file parser'
         '--postconf:`postconf -M` command parser'
         '--proc:`/proc/` file parser'
         '--proc-buddyinfo:`/proc/buddyinfo` file parser'
+        '--proc-cmdline:`/proc/cmdline` file parser'
         '--proc-consoles:`/proc/consoles` file parser'
         '--proc-cpuinfo:`/proc/cpuinfo` file parser'
         '--proc-crypto:`/proc/crypto` file parser'
@@ -276,6 +283,7 @@ _jc() {
         '--sshd-conf:`sshd` config file and `sshd -T` command parser'
         '--stat:`stat` command parser'
         '--stat-s:`stat` command streaming parser'
+        '--swapon:`swapon` command parser'
         '--sysctl:`sysctl` command parser'
         '--syslog:Syslog RFC 5424 string parser'
         '--syslog-s:Syslog RFC 5424 string streaming parser'
@@ -294,6 +302,7 @@ _jc() {
         '--top-s:`top -b` command streaming parser'
         '--tracepath:`tracepath` and `tracepath6` command parser'
         '--traceroute:`traceroute` and `traceroute6` command parser'
+        '--tune2fs:`tune2fs -l` command parser'
         '--udevadm:`udevadm info` command parser'
         '--ufw:`ufw status` command parser'
         '--ufw-appinfo:`ufw app info [application]` command parser'


@@ -250,4 +250,4 @@ Returns:
 ### Parser Information
 Compatibility: linux
-Version 1.6 by Kelly Brazil (kellyjonbrazil@gmail.com)
+Version 1.7 by Kelly Brazil (kellyjonbrazil@gmail.com)


@@ -0,0 +1,105 @@
[Home](https://kellyjonbrazil.github.io/jc/)
<a id="jc.parsers.debconf_show"></a>
# jc.parsers.debconf\_show
jc - JSON Convert `debconf-show` command output parser
Usage (cli):
$ debconf-show onlyoffice-documentserver | jc --debconf-show
or
$ jc debconf-show onlyoffice-documentserver
Usage (module):
import jc
result = jc.parse('debconf_show', debconf_show_command_output)
Schema:
[
{
"asked": boolean,
"packagename": string,
"name": string,
"value": string
}
]
Examples:
$ debconf-show onlyoffice-documentserver | jc --debconf-show -p
[
{
"asked": true,
"packagename": "onlyoffice",
"name": "jwt_secret",
"value": "aL8ei2iereuzee7cuJ6Cahjah1ixee2ah"
},
{
"asked": false,
"packagename": "onlyoffice",
"name": "db_pwd",
"value": "(password omitted)"
},
{
"asked": true,
"packagename": "onlyoffice",
"name": "rabbitmq_pwd",
"value": "(password omitted)"
},
{
"asked": true,
"packagename": "onlyoffice",
"name": "db_port",
"value": "5432"
},
{
"asked": true,
"packagename": "onlyoffice",
"name": "db_user",
"value": "onlyoffice"
},
{
"asked": true,
"packagename": "onlyoffice",
"name": "rabbitmq_proto",
"value": "amqp"
},
{
"asked": true,
"packagename": "onlyoffice",
"name": "cluster_mode",
"value": "false"
}
]
<a id="jc.parsers.debconf_show.parse"></a>
### parse
```python
def parse(data: str,
raw: bool = False,
quiet: bool = False) -> List[JSONDictType]
```
Main text parsing function
Parameters:
data: (string) text data to parse
raw: (boolean) unprocessed output if True
quiet: (boolean) suppress warning messages if True
Returns:
List of Dictionaries. Raw or processed structured data.
### Parser Information
Compatibility: linux
Version 1.0 by Kelly Brazil (kellyjonbrazil@gmail.com)
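
The schema above maps each `debconf-show` output line (`* package/name: value`, where the leading `*` marks a question that was asked) to a flat dictionary. As a rough illustration of that mapping — a hypothetical stdlib sketch, not jc's actual implementation — the transformation can be written as:

```python
def parse_debconf_show(text):
    """Illustrative sketch of debconf-show line parsing.

    Assumes the common output shape `* package/name: value`, where a
    leading `*` means the question was asked. Not jc's implementation.
    """
    out = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        asked = line.startswith('*')          # '*' prefix = question was asked
        line = line.lstrip('* ').strip()      # drop the marker, keep the rest
        key, _, value = line.partition(':')   # split at the first colon
        pkg, _, name = key.partition('/')     # package/name -> two fields
        out.append({'asked': asked, 'packagename': pkg,
                    'name': name, 'value': value.strip()})
    return out


sample = """\
* onlyoffice/jwt_secret: aL8ei2iereuzee7cuJ6Cahjah1ixee2ah
  onlyoffice/db_pwd: (password omitted)
* onlyoffice/db_port: 5432"""

for item in parse_debconf_show(sample):
    print(item)
```

For real use, `jc.parse('debconf_show', ...)` as shown in the Usage section is the supported interface; this sketch only illustrates the line-to-schema correspondence.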


@@ -30,7 +30,7 @@ Schema:
       "num": integer,
       "pkts": integer,
       "bytes": integer,             # converted based on suffix
-      "target": string,
+      "target": string,             # Null if blank
       "prot": string,
       "opt": string,                # "--" = Null
       "in": string,
@@ -186,4 +186,4 @@ Returns:
 ### Parser Information
 Compatibility: linux
-Version 1.8 by Kelly Brazil (kellyjonbrazil@gmail.com)
+Version 1.9 by Kelly Brazil (kellyjonbrazil@gmail.com)


@@ -1,37 +0,0 @@
[Home](https://kellyjonbrazil.github.io/jc/)
<a id="jc.parsers.iso_datetime"></a>
# jc.parsers.iso\_datetime
jc - JSON Convert ISO 8601 Datetime string parser
This parser has been renamed to datetime-iso (cli) or datetime_iso (module).
This parser will be removed in a future version, so please start using
the new parser name.
<a id="jc.parsers.iso_datetime.parse"></a>
### parse
```python
def parse(data, raw=False, quiet=False)
```
This parser is deprecated and calls datetime_iso. Please use datetime_iso
directly. This parser will be removed in the future.
Parameters:
data: (string) text data to parse
raw: (boolean) unprocessed output if True
quiet: (boolean) suppress warning messages if True
Returns:
Dictionary. Raw or processed structured data.
### Parser Information
Compatibility: linux, aix, freebsd, darwin, win32, cygwin
Version 1.1 by Kelly Brazil (kellyjonbrazil@gmail.com)


@@ -98,4 +98,4 @@ Returns:
 ### Parser Information
 Compatibility: linux, darwin, freebsd, aix
-Version 1.8 by Kelly Brazil (kellyjonbrazil@gmail.com)
+Version 1.9 by Kelly Brazil (kellyjonbrazil@gmail.com)


@@ -0,0 +1,126 @@
[Home](https://kellyjonbrazil.github.io/jc/)
<a id="jc.parsers.pkg_index_apk"></a>
# jc.parsers.pkg\_index\_apk
jc - JSON Convert Alpine Linux Package Index files
Usage (cli):
$ cat APKINDEX | jc --pkg-index-apk
Usage (module):
import jc
result = jc.parse('pkg_index_apk', pkg_index_apk_output)
Schema:
[
{
"checksum": string,
"package": string,
"version": string,
"architecture": string,
"package_size": integer,
"installed_size": integer,
"description": string,
"url": string,
"license": string,
"origin": string,
"maintainer": {
"name": string,
"email": string,
},
"build_time": integer,
"commit": string,
"provider_priority": string,
"dependencies": [
string
],
"provides": [
string
],
"install_if": [
string
],
}
]
Example:
$ cat APKINDEX | jc --pkg-index-apk
[
{
"checksum": "Q1znBl9k+RKgY6gl5Eg3iz73KZbLY=",
"package": "yasm",
"version": "1.3.0-r4",
"architecture": "x86_64",
"package_size": 772109,
"installed_size": 1753088,
"description": "A rewrite of NASM to allow for multiple synta...",
"url": "http://www.tortall.net/projects/yasm/",
"license": "BSD-2-Clause",
"origin": "yasm",
"maintainer": {
"name": "Natanael Copa",
"email": "ncopa@alpinelinux.org"
},
"build_time": 1681228881,
"commit": "84a227baf001b6e0208e3352b294e4d7a40e93de",
"dependencies": [
"so:libc.musl-x86_64.so.1"
],
"provides": [
"cmd:vsyasm=1.3.0-r4",
"cmd:yasm=1.3.0-r4",
"cmd:ytasm=1.3.0-r4"
]
}
]
$ cat APKINDEX | jc --pkg-index-apk --raw
[
{
"C": "Q1znBl9k+RKgY6gl5Eg3iz73KZbLY=",
"P": "yasm",
"V": "1.3.0-r4",
"A": "x86_64",
"S": "772109",
"I": "1753088",
"T": "A rewrite of NASM to allow for multiple syntax supported...",
"U": "http://www.tortall.net/projects/yasm/",
"L": "BSD-2-Clause",
"o": "yasm",
"m": "Natanael Copa <ncopa@alpinelinux.org>",
"t": "1681228881",
"c": "84a227baf001b6e0208e3352b294e4d7a40e93de",
"D": "so:libc.musl-x86_64.so.1",
"p": "cmd:vsyasm=1.3.0-r4 cmd:yasm=1.3.0-r4 cmd:ytasm=1.3.0-r4"
},
]
<a id="jc.parsers.pkg_index_apk.parse"></a>
### parse
```python
def parse(data: str, raw: bool = False, quiet: bool = False) -> List[Dict]
```
Main text parsing function
Parameters:
data: (string) text data to parse
raw: (boolean) unprocessed output if True
quiet: (boolean) suppress warning messages if True
Returns:
List of Dictionaries. Raw or processed structured data.
### Parser Information
Compatibility: linux, darwin, cygwin, win32, aix, freebsd
Version 1.0 by Roey Darwish Dror (roey.ghost@gmail.com)
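As a reading aid for the raw output above, the single-letter APKINDEX keys can be mapped to the schema's field names with a short standalone sketch. This is not jc's implementation — the `KEY_MAP` table and `parse_apkindex` helper are hypothetical — and it skips details such as splitting the `m` (maintainer) field into name and email:

```python
# Hypothetical sketch of the APKINDEX field mapping, not jc's actual code.
# Records are blank-line separated; each line is "<letter>:<value>".
KEY_MAP = {
    'C': 'checksum',
    'P': 'package',
    'V': 'version',
    'A': 'architecture',
    'S': 'package_size',
    'I': 'installed_size',
    'T': 'description',
    'U': 'url',
    'L': 'license',
    'o': 'origin',
    'm': 'maintainer',
    't': 'build_time',
    'c': 'commit',
    'D': 'dependencies',
    'p': 'provides',
}

INT_FIELDS = {'package_size', 'installed_size', 'build_time'}
LIST_FIELDS = {'dependencies', 'provides'}

def parse_apkindex(text):
    packages = []
    for record in text.strip().split('\n\n'):
        pkg = {}
        for line in record.splitlines():
            key, _, value = line.partition(':')  # only the first colon counts
            name = KEY_MAP.get(key, key)
            if name in INT_FIELDS:
                pkg[name] = int(value)           # numeric fields per the schema
            elif name in LIST_FIELDS:
                pkg[name] = value.split()        # space-separated lists
            else:
                pkg[name] = value
        packages.append(pkg)
    return packages

sample = ('C:Q1znBl9k+RKgY6gl5Eg3iz73KZbLY=\nP:yasm\nV:1.3.0-r4\n'
          'S:772109\nD:so:libc.musl-x86_64.so.1')
result = parse_apkindex(sample)
```

Note that `partition(':')` splits only on the first colon, which is why the `so:libc…` dependency value survives intact.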


@@ -0,0 +1,138 @@
[Home](https://kellyjonbrazil.github.io/jc/)
<a id="jc.parsers.pkg_index_deb"></a>
# jc.parsers.pkg\_index\_deb
jc - JSON Convert Debian Package Index file parser
Usage (cli):
$ cat Packages | jc --pkg-index-deb
Usage (module):
import jc
result = jc.parse('pkg_index_deb', pkg_index_deb_output)
Schema:
[
{
"package": string,
"version": string,
"architecture": string,
"section": string,
"priority": string,
"installed_size": integer,
"maintainer": string,
"description": string,
"homepage": string,
"depends": string,
"conflicts": string,
"replaces": string,
"vcs_git": string,
"sha256": string,
"size": integer,
"vcs_git": string,
"filename": string
}
]
Examples:
$ cat Packages | jc --pkg-index-deb
[
{
"package": "aspnetcore-runtime-2.1",
"version": "2.1.22-1",
"architecture": "amd64",
"section": "devel",
"priority": "standard",
"installed_size": 71081,
"maintainer": "Microsoft <nugetaspnet@microsoft.com>",
"description": "Microsoft ASP.NET Core 2.1.22 Shared Framework",
"homepage": "https://www.asp.net/",
"depends": "libc6 (>= 2.14), dotnet-runtime-2.1 (>= 2.1.22)",
"sha256": "48d4e78a7ceff34105411172f4c3e91a0359b3929d84d26a493...",
"size": 21937036,
"filename": "pool/main/a/aspnetcore-runtime-2.1/aspnetcore-run..."
},
{
"package": "azure-functions-core-tools-4",
"version": "4.0.4590-1",
"architecture": "amd64",
"section": "devel",
"priority": "optional",
"maintainer": "Ahmed ElSayed <ahmels@microsoft.com>",
"description": "Azure Function Core Tools v4",
"homepage": "https://docs.microsoft.com/en-us/azure/azure-func...",
"conflicts": "azure-functions-core-tools-2, azure-functions-co...",
"replaces": "azure-functions-core-tools-2, azure-functions-cor...",
"vcs_git": "https://github.com/Azure/azure-functions-core-tool...",
"sha256": "a2a4f99d6d98ba0a46832570285552f2a93bab06cebbda2afc7...",
"size": 124417844,
"filename": "pool/main/a/azure-functions-core-tools-4/azure-fu..."
}
]
$ cat Packages | jc --pkg-index-deb -r
[
{
"package": "aspnetcore-runtime-2.1",
"version": "2.1.22-1",
"architecture": "amd64",
"section": "devel",
"priority": "standard",
"installed_size": "71081",
"maintainer": "Microsoft <nugetaspnet@microsoft.com>",
"description": "Microsoft ASP.NET Core 2.1.22 Shared Framework",
"homepage": "https://www.asp.net/",
"depends": "libc6 (>= 2.14), dotnet-runtime-2.1 (>= 2.1.22)",
"sha256": "48d4e78a7ceff34105411172f4c3e91a0359b3929d84d26a493...",
"size": "21937036",
"filename": "pool/main/a/aspnetcore-runtime-2.1/aspnetcore-run..."
},
{
"package": "azure-functions-core-tools-4",
"version": "4.0.4590-1",
"architecture": "amd64",
"section": "devel",
"priority": "optional",
"maintainer": "Ahmed ElSayed <ahmels@microsoft.com>",
"description": "Azure Function Core Tools v4",
"homepage": "https://docs.microsoft.com/en-us/azure/azure-func...",
"conflicts": "azure-functions-core-tools-2, azure-functions-co...",
"replaces": "azure-functions-core-tools-2, azure-functions-cor...",
"vcs_git": "https://github.com/Azure/azure-functions-core-tool...",
"sha256": "a2a4f99d6d98ba0a46832570285552f2a93bab06cebbda2afc7...",
"size": "124417844",
"filename": "pool/main/a/azure-functions-core-tools-4/azure-fu..."
}
]
<a id="jc.parsers.pkg_index_deb.parse"></a>
### parse
```python
def parse(data: str,
raw: bool = False,
quiet: bool = False) -> List[JSONDictType]
```
Main text parsing function
Parameters:
data: (string) text data to parse
raw: (boolean) unprocessed output if True
quiet: (boolean) suppress warning messages if True
Returns:
List of Dictionaries. Raw or processed structured data.
### Parser Information
Compatibility: linux, darwin, cygwin, win32, aix, freebsd
Version 1.0 by Kelly Brazil (kellyjonbrazil@gmail.com)
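Because each record is an RFC 822-style block of `Key: Value` lines, the field normalization shown in the schema (lowercase, hyphens to underscores) can be sketched with the standard library. The `parse_packages_index` helper below is illustrative only, not jc's code:

```python
from email import message_from_string

def parse_packages_index(text):
    """Split a Packages index on blank lines and normalize field names."""
    entries = []
    for record in text.strip().split('\n\n'):
        # message_from_string handles "Key: Value" lines and continuations
        entry = {k.lower().replace('-', '_'): v
                 for k, v in message_from_string(record).items()}
        for field in ('size', 'installed_size'):
            if field in entry:
                entry[field] = int(entry[field])  # numeric fields per schema
        entries.append(entry)
    return entries

sample = (
    "Package: aspnetcore-runtime-2.1\n"
    "Version: 2.1.22-1\n"
    "Installed-Size: 71081\n"
    "\n"
    "Package: azure-functions-core-tools-4\n"
    "Size: 124417844"
)
entries = parse_packages_index(sample)
```

Using the `email` parser is just a convenient way to read the control-file syntax; multi-line `Description` continuations come along for free.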


@@ -139,4 +139,4 @@ Returns:
 ### Parser Information
 Compatibility: linux
-Version 1.1 by Kelly Brazil (kellyjonbrazil@gmail.com)
+Version 1.2 by Kelly Brazil (kellyjonbrazil@gmail.com)


@@ -0,0 +1,92 @@
[Home](https://kellyjonbrazil.github.io/jc/)
<a id="jc.parsers.proc_cmdline"></a>
# jc.parsers.proc\_cmdline
jc - JSON Convert `/proc/cmdline` file parser
Usage (cli):
$ cat /proc/cmdline | jc --proc
or
$ jc /proc/cmdline
or
$ cat /proc/cmdline | jc --proc-cmdline
Usage (module):
import jc
result = jc.parse('proc_cmdline', proc_cmdline_file)
Schema:
{
"<key>": string,
"_options": [
string
]
}
Examples:
$ cat /proc/cmdline | jc --proc -p
{
"BOOT_IMAGE": "clonezilla/live/vmlinuz",
"consoleblank": "0",
"keyboard-options": "grp:ctrl_shift_toggle,lctrl_shift_toggle",
"ethdevice-timeout": "130",
"toram": "filesystem.squashfs",
"boot": "live",
"edd": "on",
"ocs_daemonon": "ssh lighttpd",
"ocs_live_run": "sudo screen /usr/sbin/ocs-sr -g auto -e1 auto -e2 -batch -r -j2 -k -scr -p true restoreparts win7-64 sda1",
"ocs_live_extra_param": "",
"keyboard-layouts": "us,ru",
"ocs_live_batch": "no",
"locales": "ru_RU.UTF-8",
"vga": "788",
"net.ifnames": "0",
"union": "overlay",
"fetch": "http://10.1.1.1/tftpboot/clonezilla/live/filesystem.squashfs",
"ocs_postrun99": "sudo reboot",
"initrd": "clonezilla/live/initrd.img",
"_options": [
"config",
"noswap",
"nolocales",
"nomodeset",
"noprompt",
"nosplash",
"nodmraid",
"components"
]
}
<a id="jc.parsers.proc_cmdline.parse"></a>
### parse
```python
def parse(data: str, raw: bool = False, quiet: bool = False) -> JSONDictType
```
Main text parsing function
Parameters:
data: (string) text data to parse
raw: (boolean) unprocessed output if True
quiet: (boolean) suppress warning messages if True
Returns:
Dictionary. Raw or processed structured data.
### Parser Information
Compatibility: linux
Version 1.0 by Kelly Brazil (kellyjonbrazil@gmail.com)
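The schema above follows from a simple splitting rule: whitespace-separated tokens containing `=` become key/value pairs and bare tokens are collected under `_options`. A hedged illustration of that rule (naive about quoted values such as the `ocs_live_run` example, and not jc's actual code):

```python
def parse_proc_cmdline(data):
    """Split kernel boot parameters into key=value pairs and bare flags."""
    result = {'_options': []}
    for token in data.strip().split():  # naive split; quoted values need more care
        if '=' in token:
            key, _, value = token.partition('=')
            result[key] = value
        else:
            result['_options'].append(token)
    return result

cmdline = 'BOOT_IMAGE=clonezilla/live/vmlinuz consoleblank=0 noswap nomodeset vga=788'
parsed = parse_proc_cmdline(cmdline)
```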


@@ -184,4 +184,4 @@ Returns:
 ### Parser Information
 Compatibility: linux
-Version 1.6 by Kelly Brazil (kellyjonbrazil@gmail.com)
+Version 1.7 by Kelly Brazil (kellyjonbrazil@gmail.com)

docs/parsers/swapon.md Normal file

@@ -0,0 +1,69 @@
[Home](https://kellyjonbrazil.github.io/jc/)
<a id="jc.parsers.swapon"></a>
# jc.parsers.swapon
jc - JSON Convert `swapon` command output parser
Usage (cli):
$ swapon | jc --swapon
or
$ jc swapon
Usage (module):
import jc
result = jc.parse('swapon', swapon_command_output)
Schema:
[
{
"name": string,
"type": string,
"size": integer,
"used": integer,
"priority": integer
}
]
Example:
$ swapon | jc --swapon
[
{
"name": "/swapfile",
"type": "file",
"size": 1073741824,
"used": 0,
"priority": -2
}
]
<a id="jc.parsers.swapon.parse"></a>
### parse
```python
def parse(data: str, raw: bool = False, quiet: bool = False) -> List[_Entry]
```
Main text parsing function
Parameters:
data: (string) text data to parse
raw: (boolean) unprocessed output if True
quiet: (boolean) suppress warning messages if True
Returns:
List of Dictionaries. Raw or processed structured data.
### Parser Information
Compatibility: linux, freebsd
Version 1.0 by Roey Darwish Dror (roey.ghost@gmail.com)
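The integer `size` and `used` fields imply a conversion from the human-readable column values `swapon` prints. A rough sketch of that idea (hypothetical helpers, limited to a few binary suffixes; not jc's implementation):

```python
UNITS = {'K': 1024, 'M': 1024 ** 2, 'G': 1024 ** 3}

def to_bytes(value):
    """Convert strings like '1G' to bytes; plain digits pass through."""
    if value[-1] in UNITS:
        return int(value[:-1]) * UNITS[value[-1]]
    return int(value)

def parse_swapon(output):
    rows = []
    for line in output.strip().splitlines()[1:]:  # skip the header row
        name, swap_type, size, used, prio = line.split()
        rows.append({
            'name': name,
            'type': swap_type,
            'size': to_bytes(size),
            'used': to_bytes(used),
            'priority': int(prio),
        })
    return rows

sample = 'NAME      TYPE SIZE USED PRIO\n/swapfile file   1G    0   -2'
entries = parse_swapon(sample)
```

With this rule a 1G swap file comes out as 1073741824 bytes, matching the example output above.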

docs/parsers/tune2fs.md Normal file

@@ -0,0 +1,235 @@
[Home](https://kellyjonbrazil.github.io/jc/)
<a id="jc.parsers.tune2fs"></a>
# jc.parsers.tune2fs
jc - JSON Convert `tune2fs -l` command output parser
Usage (cli):
$ tune2fs -l /dev/xvda4 | jc --tune2fs
or
$ jc tune2fs -l /dev/xvda4
Usage (module):
import jc
result = jc.parse('tune2fs', tune2fs_command_output)
Schema:
{
"version": string,
"filesystem_volume_name": string,
"last_mounted_on": string,
"filesystem_uuid": string,
"filesystem_magic_number": string,
"filesystem_revision_number": string,
"filesystem_features": [
string
],
"filesystem_flags": string,
"default_mount_options": string,
"filesystem_state": string,
"errors_behavior": string,
"filesystem_os_type": string,
"inode_count": integer,
"block_count": integer,
"reserved_block_count": integer,
"overhead_clusters": integer,
"free_blocks": integer,
"free_inodes": integer,
"first_block": integer,
"block_size": integer,
"fragment_size": integer,
"group_descriptor_size": integer,
"reserved_gdt_blocks": integer,
"blocks_per_group": integer,
"fragments_per_group": integer,
"inodes_per_group": integer,
"inode_blocks_per_group": integer,
"flex_block_group_size": integer,
"filesystem_created": string,
"filesystem_created_epoch": integer,
"filesystem_created_epoch_utc": integer,
"last_mount_time": string,
"last_mount_time_epoch": integer,
"last_mount_time_epoch_utc": integer,
"last_write_time": string,
"last_write_time_epoch": integer,
"last_write_time_epoch_utc": integer,
"mount_count": integer,
"maximum_mount_count": integer,
"last_checked": string,
"last_checked_epoch": integer,
"last_checked_epoch_utc": integer,
"check_interval": string,
"lifetime_writes": string,
"reserved_blocks_uid": string,
"reserved_blocks_gid": string,
"first_inode": integer,
"inode_size": integer,
"required_extra_isize": integer,
"desired_extra_isize": integer,
"journal_inode": integer,
"default_directory_hash": string,
"directory_hash_seed": string,
"journal_backup": string,
"checksum_type": string,
"checksum": string
}
Examples:
$ tune2fs | jc --tune2fs -p
{
"version": "1.46.2 (28-Feb-2021)",
"filesystem_volume_name": "<none>",
"last_mounted_on": "/home",
"filesystem_uuid": "5fb78e1a-b214-44e2-a309-8e35116d8dd6",
"filesystem_magic_number": "0xEF53",
"filesystem_revision_number": "1 (dynamic)",
"filesystem_features": [
"has_journal",
"ext_attr",
"resize_inode",
"dir_index",
"filetype",
"needs_recovery",
"extent",
"64bit",
"flex_bg",
"sparse_super",
"large_file",
"huge_file",
"dir_nlink",
"extra_isize",
"metadata_csum"
],
"filesystem_flags": "signed_directory_hash",
"default_mount_options": "user_xattr acl",
"filesystem_state": "clean",
"errors_behavior": "Continue",
"filesystem_os_type": "Linux",
"inode_count": 3932160,
"block_count": 15728640,
"reserved_block_count": 786432,
"free_blocks": 15198453,
"free_inodes": 3864620,
"first_block": 0,
"block_size": 4096,
"fragment_size": 4096,
"group_descriptor_size": 64,
"reserved_gdt_blocks": 1024,
"blocks_per_group": 32768,
"fragments_per_group": 32768,
"inodes_per_group": 8192,
"inode_blocks_per_group": 512,
"flex_block_group_size": 16,
"filesystem_created": "Mon Apr 6 15:10:37 2020",
"last_mount_time": "Mon Sep 19 15:16:20 2022",
"last_write_time": "Mon Sep 19 15:16:20 2022",
"mount_count": 14,
"maximum_mount_count": -1,
"last_checked": "Fri Apr 8 15:24:22 2022",
"check_interval": "0 (<none>)",
"lifetime_writes": "203 GB",
"reserved_blocks_uid": "0 (user root)",
"reserved_blocks_gid": "0 (group root)",
"first_inode": 11,
"inode_size": 256,
"required_extra_isize": 32,
"desired_extra_isize": 32,
"journal_inode": 8,
"default_directory_hash": "half_md4",
"directory_hash_seed": "67d5358d-723d-4ce3-b3c0-30ddb433ad9e",
"journal_backup": "inode blocks",
"checksum_type": "crc32c",
"checksum": "0x7809afff",
"filesystem_created_epoch": 1586211037,
"filesystem_created_epoch_utc": null,
"last_mount_time_epoch": 1663625780,
"last_mount_time_epoch_utc": null,
"last_write_time_epoch": 1663625780,
"last_write_time_epoch_utc": null,
"last_checked_epoch": 1649456662,
"last_checked_epoch_utc": null
}
$ tune2fs | jc --tune2fs -p -r
{
"version": "1.46.2 (28-Feb-2021)",
"filesystem_volume_name": "<none>",
"last_mounted_on": "/home",
"filesystem_uuid": "5fb78e1a-b214-44e2-a309-8e35116d8dd6",
"filesystem_magic_number": "0xEF53",
"filesystem_revision_number": "1 (dynamic)",
"filesystem_features": "has_journal ext_attr resize_inode dir_index filetype needs_recovery extent 64bit flex_bg sparse_super large_file huge_file dir_nlink extra_isize metadata_csum",
"filesystem_flags": "signed_directory_hash",
"default_mount_options": "user_xattr acl",
"filesystem_state": "clean",
"errors_behavior": "Continue",
"filesystem_os_type": "Linux",
"inode_count": "3932160",
"block_count": "15728640",
"reserved_block_count": "786432",
"free_blocks": "15198453",
"free_inodes": "3864620",
"first_block": "0",
"block_size": "4096",
"fragment_size": "4096",
"group_descriptor_size": "64",
"reserved_gdt_blocks": "1024",
"blocks_per_group": "32768",
"fragments_per_group": "32768",
"inodes_per_group": "8192",
"inode_blocks_per_group": "512",
"flex_block_group_size": "16",
"filesystem_created": "Mon Apr 6 15:10:37 2020",
"last_mount_time": "Mon Sep 19 15:16:20 2022",
"last_write_time": "Mon Sep 19 15:16:20 2022",
"mount_count": "14",
"maximum_mount_count": "-1",
"last_checked": "Fri Apr 8 15:24:22 2022",
"check_interval": "0 (<none>)",
"lifetime_writes": "203 GB",
"reserved_blocks_uid": "0 (user root)",
"reserved_blocks_gid": "0 (group root)",
"first_inode": "11",
"inode_size": "256",
"required_extra_isize": "32",
"desired_extra_isize": "32",
"journal_inode": "8",
"default_directory_hash": "half_md4",
"directory_hash_seed": "67d5358d-723d-4ce3-b3c0-30ddb433ad9e",
"journal_backup": "inode blocks",
"checksum_type": "crc32c",
"checksum": "0x7809afff"
}
<a id="jc.parsers.tune2fs.parse"></a>
### parse
```python
def parse(data: str, raw: bool = False, quiet: bool = False) -> JSONDictType
```
Main text parsing function
Parameters:
data: (string) text data to parse
raw: (boolean) unprocessed output if True
quiet: (boolean) suppress warning messages if True
Returns:
Dictionary. Raw or processed structured data.
### Parser Information
Compatibility: linux
Version 1.0 by Kelly Brazil (kellyjonbrazil@gmail.com)
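Most of the normalization the schema implies is mechanical: `tune2fs -l` prints `Label:   value` lines, and labels become lowercase snake_case keys. A hedged sketch of just that step (values stay strings, as in the `--raw` example above; the `parse_tune2fs` helper name is hypothetical):

```python
def parse_tune2fs(output):
    result = {}
    for line in output.splitlines():
        if ':' not in line:
            continue  # e.g. a bare banner line with no "Label: value" shape
        key, _, value = line.partition(':')  # first colon only, so timestamps survive
        key = key.strip().lower().replace(' ', '_')
        result[key] = value.strip()
    return result

sample = '''Filesystem volume name:   <none>
Last mounted on:          /home
Inode count:              3932160'''
info = parse_tune2fs(sample)
```

The real parser goes further (integer conversion, epoch timestamps, splitting `filesystem_features` into a list); this only shows the key-mangling convention.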


@@ -149,4 +149,4 @@ Returns:
 ### Parser Information
 Compatibility: linux
-Version 1.3 by Kelly Brazil (kellyjonbrazil@gmail.com)
+Version 1.4 by Kelly Brazil (kellyjonbrazil@gmail.com)


@@ -123,4 +123,4 @@ Returns:
 ### Parser Information
 Compatibility: linux
-Version 1.2 by Kelly Brazil (kellyjonbrazil@gmail.com)
+Version 1.3 by Kelly Brazil (kellyjonbrazil@gmail.com)


@@ -31,7 +31,8 @@ Schema:
       "current_height": integer,
       "maximum_width": integer,
       "maximum_height": integer,
-      "devices": {
+      "devices": [
+        {
           "modes": [
             {
               "resolution_width": integer,
@@ -46,7 +47,8 @@ Schema:
             ]
           }
         ]
-      },
+        }
+      ],
       "is_connected": boolean,
       "is_primary": boolean,
       "device_name": string,
@@ -62,7 +64,7 @@ Schema:
           "rotation": string,
           "reflection": string
         }
-      ],
+      ]
     }

 Examples:
@@ -78,7 +80,8 @@ Examples:
       "current_height": 1080,
       "maximum_width": 32767,
       "maximum_height": 32767,
-      "devices": {
+      "devices": [
+        {
           "modes": [
             {
               "resolution_width": 1920,
@@ -122,6 +125,7 @@ Examples:
           "rotation": "normal",
           "reflection": "normal"
         }
+      ]
     }
   ]
 }
@@ -137,7 +141,8 @@ Examples:
       "current_height": 1080,
       "maximum_width": 32767,
       "maximum_height": 32767,
-      "devices": {
+      "devices": [
+        {
           "modes": [
             {
               "resolution_width": 1920,
@@ -184,6 +189,7 @@ Examples:
           "rotation": "normal",
           "reflection": "normal"
         }
+      ]
     }
   ]
 }
@@ -211,4 +217,4 @@ Returns:
 ### Parser Information
 Compatibility: linux, darwin, cygwin, aix, freebsd
-Version 1.3 by Kevin Lyter (code (at) lyterk.com)
+Version 1.4 by Kevin Lyter (code (at) lyterk.com)


@@ -9,6 +9,7 @@
 * [convert\_to\_int](#jc.utils.convert_to_int)
 * [convert\_to\_float](#jc.utils.convert_to_float)
 * [convert\_to\_bool](#jc.utils.convert_to_bool)
+* [convert\_size\_to\_int](#jc.utils.convert_size_to_int)
 * [input\_type\_check](#jc.utils.input_type_check)
 * [timestamp](#jc.utils.timestamp)
 * [\_\_init\_\_](#jc.utils.timestamp.__init__)
@@ -178,6 +179,48 @@ Returns:
     True/False     False unless a 'truthy' number or string is found
                    ('y', 'yes', 'true', '1', 1, -1, etc.)

+<a id="jc.utils.convert_size_to_int"></a>
+
+### convert\_size\_to\_int
+
+```python
+def convert_size_to_int(size: str, binary: bool = False) -> Optional[int]
+```
+
+Parse a human readable data size and return the number of bytes.
+
+Parameters:
+
+    size:    (string)  The human readable file size to parse.
+    binary:  (boolean) `True` to use binary multiples of bytes
+             (base-2) for ambiguous unit symbols and names,
+             `False` to use decimal multiples of bytes (base-10).
+
+Returns:
+
+    integer/None   Integer if successful conversion, otherwise None
+
+This function knows how to parse sizes in bytes, kilobytes, megabytes,
+gigabytes, terabytes and petabytes. Some examples:
+
+    >>> convert_size_to_int('42')
+    42
+    >>> convert_size_to_int('13b')
+    13
+    >>> convert_size_to_int('5 bytes')
+    5
+    >>> convert_size_to_int('1 KB')
+    1000
+    >>> convert_size_to_int('1 kilobyte')
+    1000
+    >>> convert_size_to_int('1 KiB')
+    1024
+    >>> convert_size_to_int('1 KB', binary=True)
+    1024
+    >>> convert_size_to_int('1.5 GB')
+    1500000000
+    >>> convert_size_to_int('1.5 GB', binary=True)
+    1610612736
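The documented behavior can be mirrored in a small standalone sketch, useful when reading the examples above. This is not the `jc.utils` implementation, only an approximation that reproduces the listed cases:

```python
import re

def convert_size_to_int(size, binary=False):
    """Approximate the documented conversions (sketch, not jc's code)."""
    match = re.match(r'([\d.]+)\s*([A-Za-z]*)', size.strip())
    if not match:
        return None
    number, unit = float(match.group(1)), match.group(2).lower()
    if unit in ('', 'b', 'byte', 'bytes'):
        return int(number)                 # plain byte counts pass through
    # 'KiB'-style units are always base-2; binary=True forces base-2 for
    # ambiguous units like 'KB'
    base = 1024 if (binary or unit.endswith('ib')) else 1000
    power = {'k': 1, 'm': 2, 'g': 3, 't': 4, 'p': 5}.get(unit[0])
    if power is None:
        return None
    return int(number * base ** power)
```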
<a id="jc.utils.input_type_check"></a>

### input\_type\_check


@@ -45,10 +45,6 @@ __version_info__ = tuple(int(segment) for segment in __version__.split("."))
 import sys
 import os

-PY3 = sys.version_info[0] == 3
-if PY3:
-    unicode = str

 if sys.platform.startswith('java'):
     import platform
@@ -490,10 +486,7 @@ def _get_win_folder_from_registry(csidl_name):
     registry for this guarantees us the correct answer for all CSIDL_*
     names.
     """
-    if PY3:
-        import winreg as _winreg
-    else:
-        import _winreg
+    import winreg as _winreg

     shell_folder_name = {
         "CSIDL_APPDATA": "AppData",


@@ -9,7 +9,7 @@ from .jc_types import ParserInfoType, JSONDictType
 from jc import appdirs

-__version__ = '1.23.6'
+__version__ = '1.24.0'

 parsers: List[str] = [
     'acpi',
@@ -34,6 +34,7 @@ parsers: List[str] = [
     'csv-s',
     'date',
     'datetime-iso',
+    'debconf-show',
     'df',
     'dig',
     'dir',
@@ -69,7 +70,6 @@ parsers: List[str] = [
     'ip-address',
     'iptables',
     'ip-route',
-    'iso-datetime',
     'iw-scan',
     'iwconfig',
     'jar-manifest',
@@ -107,10 +107,13 @@ parsers: List[str] = [
     'ping-s',
     'pip-list',
     'pip-show',
+    'pkg-index-apk',
+    'pkg-index-deb',
     'plist',
     'postconf',
     'proc',
     'proc-buddyinfo',
+    'proc-cmdline',
     'proc-consoles',
     'proc-cpuinfo',
     'proc-crypto',
@@ -175,6 +178,7 @@ parsers: List[str] = [
     'sshd-conf',
     'stat',
     'stat-s',
+    'swapon',
     'sysctl',
     'syslog',
     'syslog-s',
@@ -193,6 +197,7 @@ parsers: List[str] = [
     'top-s',
     'tracepath',
     'traceroute',
+    'tune2fs',
     'udevadm',
     'ufw',
     'ufw-appinfo',


@@ -227,7 +227,7 @@ import jc.utils

 class info():
     """Provides parser metadata (version, author, etc.)"""
-    version = '1.6'
+    version = '1.7'
     description = '`acpi` command parser'
     author = 'Kelly Brazil'
     author_email = 'kellyjonbrazil@gmail.com'
@@ -337,18 +337,14 @@ def parse(data, raw=False, quiet=False):
                     output_line['state'] = 'Not charging'
                     output_line['charge_percent'] = line.split()[-1].rstrip('%,')

-                if 'Charging' in line \
-                    or 'Discharging' in line \
-                    or 'Full' in line:
+                if any(word in line for word in ('Charging', 'Discharging', 'Full')):
                     output_line['state'] = line.split()[2][:-1]
                     output_line['charge_percent'] = line.split()[3].rstrip('%,')

-                    if 'will never fully discharge' in line:
-                        pass
-                    elif 'rate information unavailable' not in line:
-                        if 'Charging' in line:
-                            output_line['until_charged'] = line.split()[4]
-                        if 'Discharging' in line:
-                            output_line['charge_remaining'] = line.split()[4]
+                    if 'will never fully discharge' in line or 'rate information unavailable' in line:
+                        pass
+                    elif 'Charging' in line:
+                        output_line['until_charged'] = line.split()[4]
+                    elif 'Discharging' in line:
+                        output_line['charge_remaining'] = line.split()[4]

                 if 'design capacity' in line:
@@ -359,10 +355,7 @@ def parse(data, raw=False, quiet=False):
             if obj_type == 'Adapter':
                 output_line['type'] = obj_type
                 output_line['id'] = obj_id
-                if 'on-line' in line:
-                    output_line['on-line'] = True
-                else:
-                    output_line['on-line'] = False
+                output_line['on-line'] = 'on-line' in line

             if obj_type == 'Thermal':
                 output_line['type'] = obj_type


@@ -5,7 +5,7 @@ import socket
 import struct

 from ._errors import unwrap
-from ._types import byte_cls, bytes_to_list, str_cls, type_name
+from ._types import type_name


 def inet_ntop(address_family, packed_ip):
@@ -33,7 +33,7 @@ def inet_ntop(address_family, packed_ip):
             repr(address_family)
         ))

-    if not isinstance(packed_ip, byte_cls):
+    if not isinstance(packed_ip, bytes):
         raise TypeError(unwrap(
             '''
             packed_ip must be a byte string, not %s
@@ -52,7 +52,7 @@ def inet_ntop(address_family, packed_ip):
         ))

     if address_family == socket.AF_INET:
-        return '%d.%d.%d.%d' % tuple(bytes_to_list(packed_ip))
+        return '%d.%d.%d.%d' % tuple(list(packed_ip))

     octets = struct.unpack(b'!HHHHHHHH', packed_ip)
@@ -106,7 +106,7 @@ def inet_pton(address_family, ip_string):
             repr(address_family)
         ))

-    if not isinstance(ip_string, str_cls):
+    if not isinstance(ip_string, str):
         raise TypeError(unwrap(
             '''
             ip_string must be a unicode string, not %s


@@ -13,25 +13,16 @@ from __future__ import unicode_literals, division, absolute_import, print_functi
 from encodings import idna  # noqa
 import codecs
 import re
-import sys

 from ._errors import unwrap
-from ._types import byte_cls, str_cls, type_name, bytes_to_list, int_types
+from ._types import type_name

-if sys.version_info < (3,):
-    from urlparse import urlsplit, urlunsplit
-    from urllib import (
-        quote as urlquote,
-        unquote as unquote_to_bytes,
-    )
-else:
-    from urllib.parse import (
-        quote as urlquote,
-        unquote_to_bytes,
-        urlsplit,
-        urlunsplit,
-    )
+from urllib.parse import (
+    quote as urlquote,
+    unquote_to_bytes,
+    urlsplit,
+    urlunsplit,
+)


 def iri_to_uri(value, normalize=False):
@@ -48,7 +39,7 @@ def iri_to_uri(value, normalize=False):
         A byte string of the ASCII-encoded URI
     """

-    if not isinstance(value, str_cls):
+    if not isinstance(value, str):
         raise TypeError(unwrap(
             '''
             value must be a unicode string, not %s
@@ -57,18 +48,6 @@ def iri_to_uri(value, normalize=False):
         ))

     scheme = None
-    # Python 2.6 doesn't split properly is the URL doesn't start with http:// or https://
-    if sys.version_info < (2, 7) and not value.startswith('http://') and not value.startswith('https://'):
-        real_prefix = None
-        prefix_match = re.match('^[^:]*://', value)
-        if prefix_match:
-            real_prefix = prefix_match.group(0)
-            value = 'http://' + value[len(real_prefix):]
-        parsed = urlsplit(value)
-        if real_prefix:
-            value = real_prefix + value[7:]
-            scheme = _urlquote(real_prefix[:-3])
-    else:
-        parsed = urlsplit(value)
+    parsed = urlsplit(value)

     if scheme is None:
@@ -81,7 +60,7 @@ def iri_to_uri(value, normalize=False):
         password = _urlquote(parsed.password, safe='!$&\'()*+,;=')
         port = parsed.port
         if port is not None:
-            port = str_cls(port).encode('ascii')
+            port = str(port).encode('ascii')

         netloc = b''
         if username is not None:
@@ -112,7 +91,7 @@ def iri_to_uri(value, normalize=False):
         path = ''

     output = urlunsplit((scheme, netloc, path, query, fragment))
-    if isinstance(output, str_cls):
+    if isinstance(output, str):
         output = output.encode('latin1')
     return output
@@ -128,7 +107,7 @@ def uri_to_iri(value):
         A unicode string of the IRI
     """

-    if not isinstance(value, byte_cls):
+    if not isinstance(value, bytes):
         raise TypeError(unwrap(
             '''
             value must be a byte string, not %s
@@ -148,7 +127,7 @@ def uri_to_iri(value):
     if hostname:
         hostname = hostname.decode('idna')
     port = parsed.port
-    if port and not isinstance(port, int_types):
+    if port and not isinstance(port, int):
         port = port.decode('ascii')

     netloc = ''
@@ -160,7 +139,7 @@ def uri_to_iri(value):
     if hostname is not None:
         netloc += hostname
     if port is not None:
-        netloc += ':' + str_cls(port)
+        netloc += ':' + str(port)

     path = _urlunquote(parsed.path, remap=['/'], preserve=True)
     query = _urlunquote(parsed.query, remap=['&', '='], preserve=True)
@@ -182,7 +161,7 @@ def _iri_utf8_errors_handler(exc):
         resume at)
     """

-    bytes_as_ints = bytes_to_list(exc.object[exc.start:exc.end])
+    bytes_as_ints = list(exc.object[exc.start:exc.end])
     replacements = ['%%%02x' % num for num in bytes_as_ints]
     return (''.join(replacements), exc.end)
@@ -230,7 +209,7 @@ def _urlquote(string, safe=''):
     string = re.sub('%[0-9a-fA-F]{2}', _extract_escape, string)
     output = urlquote(string.encode('utf-8'), safe=safe.encode('utf-8'))

-    if not isinstance(output, byte_cls):
+    if not isinstance(output, bytes):
         output = output.encode('ascii')

     # Restore the existing quoted values that we extracted


@@ -1,135 +0,0 @@
# Copyright (c) 2009 Raymond Hettinger
#
# Permission is hereby granted, free of charge, to any person
# obtaining a copy of this software and associated documentation files
# (the "Software"), to deal in the Software without restriction,
# including without limitation the rights to use, copy, modify, merge,
# publish, distribute, sublicense, and/or sell copies of the Software,
# and to permit persons to whom the Software is furnished to do so,
# subject to the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
# OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
# HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
# OTHER DEALINGS IN THE SOFTWARE.
import sys
if not sys.version_info < (2, 7):
from collections import OrderedDict
else:
from UserDict import DictMixin
class OrderedDict(dict, DictMixin):
def __init__(self, *args, **kwds):
if len(args) > 1:
raise TypeError('expected at most 1 arguments, got %d' % len(args))
try:
self.__end
except AttributeError:
self.clear()
self.update(*args, **kwds)
def clear(self):
self.__end = end = []
end += [None, end, end] # sentinel node for doubly linked list
self.__map = {} # key --> [key, prev, next]
dict.clear(self)
def __setitem__(self, key, value):
if key not in self:
end = self.__end
curr = end[1]
curr[2] = end[1] = self.__map[key] = [key, curr, end]
dict.__setitem__(self, key, value)
def __delitem__(self, key):
dict.__delitem__(self, key)
key, prev, next_ = self.__map.pop(key)
prev[2] = next_
next_[1] = prev
def __iter__(self):
end = self.__end
curr = end[2]
while curr is not end:
yield curr[0]
curr = curr[2]
def __reversed__(self):
end = self.__end
curr = end[1]
while curr is not end:
yield curr[0]
curr = curr[1]
def popitem(self, last=True):
if not self:
raise KeyError('dictionary is empty')
if last:
key = reversed(self).next()
else:
key = iter(self).next()
value = self.pop(key)
return key, value
def __reduce__(self):
items = [[k, self[k]] for k in self]
tmp = self.__map, self.__end
del self.__map, self.__end
inst_dict = vars(self).copy()
self.__map, self.__end = tmp
if inst_dict:
return (self.__class__, (items,), inst_dict)
return self.__class__, (items,)
def keys(self):
return list(self)
setdefault = DictMixin.setdefault
update = DictMixin.update
pop = DictMixin.pop
values = DictMixin.values
items = DictMixin.items
iterkeys = DictMixin.iterkeys
itervalues = DictMixin.itervalues
iteritems = DictMixin.iteritems
def __repr__(self):
if not self:
return '%s()' % (self.__class__.__name__,)
return '%s(%r)' % (self.__class__.__name__, self.items())
def copy(self):
return self.__class__(self)
@classmethod
def fromkeys(cls, iterable, value=None):
d = cls()
for key in iterable:
d[key] = value
return d
def __eq__(self, other):
if isinstance(other, OrderedDict):
if len(self) != len(other):
return False
for p, q in zip(self.items(), other.items()):
if p != q:
return False
return True
return dict.__eq__(self, other)
def __ne__(self, other):
return not self == other
@@ -2,27 +2,9 @@
 from __future__ import unicode_literals, division, absolute_import, print_function

 import inspect
-import sys

-if sys.version_info < (3,):
-    str_cls = unicode  # noqa
-    byte_cls = str
-    int_types = (int, long)  # noqa
-
-    def bytes_to_list(byte_string):
-        return [ord(b) for b in byte_string]
-
-    chr_cls = chr
-
-else:
-    str_cls = str
-    byte_cls = bytes
-    int_types = int
-    bytes_to_list = list
-
-    def chr_cls(num):
-        return bytes([num])
+
+def chr_cls(num):
+    return bytes([num])
@@ -48,8 +48,10 @@ Other type classes are defined that help compose the types listed above.
 from __future__ import unicode_literals, division, absolute_import, print_function

+from collections import OrderedDict
 from datetime import datetime, timedelta
 from fractions import Fraction
+from io import BytesIO
 import binascii
 import copy
 import math

@@ -58,22 +60,10 @@ import sys

 from . import _teletex_codec
 from ._errors import unwrap
-from ._ordereddict import OrderedDict
-from ._types import type_name, str_cls, byte_cls, int_types, chr_cls
+from ._types import type_name, chr_cls
 from .parser import _parse, _dump_header
 from .util import int_to_bytes, int_from_bytes, timezone, extended_datetime, create_timezone, utc_with_dst

-if sys.version_info <= (3,):
-    from cStringIO import StringIO as BytesIO
-    range = xrange  # noqa
-    _PY2 = True
-
-else:
-    from io import BytesIO
-    _PY2 = False
-
 _teletex_codec.register()

@@ -220,7 +210,7 @@ class Asn1Value(object):
             An instance of the current class
         """

-        if not isinstance(encoded_data, byte_cls):
+        if not isinstance(encoded_data, bytes):
             raise TypeError('encoded_data must be a byte string, not %s' % type_name(encoded_data))

         spec = None

@@ -291,7 +281,7 @@ class Asn1Value(object):
         cls = self.__class__

         # Allow explicit to be specified as a simple 2-element tuple
         # instead of requiring the user make a nested tuple
-        if cls.explicit is not None and isinstance(cls.explicit[0], int_types):
+        if cls.explicit is not None and isinstance(cls.explicit[0], int):
             cls.explicit = (cls.explicit, )

         if hasattr(cls, '_setup'):
             self._setup()

@@ -299,7 +289,7 @@ class Asn1Value(object):
         # Normalize tagging values
         if explicit is not None:
-            if isinstance(explicit, int_types):
+            if isinstance(explicit, int):
                 if class_ is None:
                     class_ = 'context'
                 explicit = (class_, explicit)

@@ -309,7 +299,7 @@ class Asn1Value(object):
                 tag = None

         if implicit is not None:
-            if isinstance(implicit, int_types):
+            if isinstance(implicit, int):
                 if class_ is None:
                     class_ = 'context'
                 implicit = (class_, implicit)

@@ -336,11 +326,11 @@ class Asn1Value(object):
             if explicit is not None:
                 # Ensure we have a tuple of 2-element tuples
-                if len(explicit) == 2 and isinstance(explicit[1], int_types):
+                if len(explicit) == 2 and isinstance(explicit[1], int):
                     explicit = (explicit, )
                 for class_, tag in explicit:
                     invalid_class = None
-                    if isinstance(class_, int_types):
+                    if isinstance(class_, int):
                         if class_ not in CLASS_NUM_TO_NAME_MAP:
                             invalid_class = class_
                     else:

@@ -356,7 +346,7 @@ class Asn1Value(object):
                             repr(invalid_class)
                         ))
                     if tag is not None:
-                        if not isinstance(tag, int_types):
+                        if not isinstance(tag, int):
                             raise TypeError(unwrap(
                                 '''
                                 explicit tag must be an integer, not %s

@@ -379,7 +369,7 @@ class Asn1Value(object):
                         repr(class_)
                     ))
                 if tag is not None:
-                    if not isinstance(tag, int_types):
+                    if not isinstance(tag, int):
                         raise TypeError(unwrap(
                             '''
                             implicit tag must be an integer, not %s

@@ -445,9 +435,6 @@ class Asn1Value(object):
             A unicode string
         """

-        if _PY2:
-            return self.__bytes__()
-        else:
-            return self.__unicode__()
+        return self.__unicode__()

     def __repr__(self):

@@ -456,9 +443,6 @@ class Asn1Value(object):
             A unicode string
         """

-        if _PY2:
-            return '<%s %s b%s>' % (type_name(self), id(self), repr(self.dump()))
-        else:
-            return '<%s %s %s>' % (type_name(self), id(self), repr(self.dump()))
+        return '<%s %s %s>' % (type_name(self), id(self), repr(self.dump()))

     def __bytes__(self):

@@ -608,9 +592,6 @@ class Asn1Value(object):
             self.parsed.debug(nest_level + 2)
         elif hasattr(self, 'chosen'):
             self.chosen.debug(nest_level + 2)
         else:
-            if _PY2 and isinstance(self.native, byte_cls):
-                print('%s    Native: b%s' % (prefix, repr(self.native)))
-            else:
-                print('%s    Native: %s' % (prefix, self.native))
+            print('%s    Native: %s' % (prefix, self.native))

@@ -1058,7 +1039,7 @@ class Choice(Asn1Value):
             A instance of the current class
         """

-        if not isinstance(encoded_data, byte_cls):
+        if not isinstance(encoded_data, bytes):
             raise TypeError('encoded_data must be a byte string, not %s' % type_name(encoded_data))

         value, _ = _parse_build(encoded_data, spec=cls, spec_params=kwargs, strict=strict)

@@ -1425,16 +1406,10 @@ class Concat(object):

     def __str__(self):
         """
-        Since str is different in Python 2 and 3, this calls the appropriate
-        method, __unicode__() or __bytes__()
-
         :return:
             A unicode string
         """

-        if _PY2:
-            return self.__bytes__()
-        else:
-            return self.__unicode__()
+        return self.__unicode__()

     def __bytes__(self):

@@ -1684,7 +1659,7 @@ class Primitive(Asn1Value):
             A byte string
         """

-        if not isinstance(value, byte_cls):
+        if not isinstance(value, bytes):
             raise TypeError(unwrap(
                 '''
                 %s value must be a byte string, not %s

@@ -1784,7 +1759,7 @@ class AbstractString(Constructable, Primitive):
             A unicode string
         """

-        if not isinstance(value, str_cls):
+        if not isinstance(value, str):
             raise TypeError(unwrap(
                 '''
                 %s value must be a unicode string, not %s

@@ -1915,7 +1890,7 @@ class Integer(Primitive, ValueMap):
             ValueError - when an invalid value is passed
         """

-        if isinstance(value, str_cls):
+        if isinstance(value, str):
             if self._map is None:
                 raise ValueError(unwrap(
                     '''

@@ -1935,7 +1910,7 @@ class Integer(Primitive, ValueMap):

             value = self._reverse_map[value]

-        elif not isinstance(value, int_types):
+        elif not isinstance(value, int):
             raise TypeError(unwrap(
                 '''
                 %s value must be an integer or unicode string when a name_map

@@ -2004,7 +1979,7 @@ class _IntegerBitString(object):
             # return an empty chunk, for cases like \x23\x80\x00\x00
             return []

-        unused_bits_len = ord(self.contents[0]) if _PY2 else self.contents[0]
+        unused_bits_len = self.contents[0]
         value = int_from_bytes(self.contents[1:])
         bits = (len(self.contents) - 1) * 8

@@ -2135,7 +2110,7 @@ class BitString(_IntegerBitString, Constructable, Castable, Primitive, ValueMap)
                 if key in value:
                     bits[index] = 1

-            value = ''.join(map(str_cls, bits))
+            value = ''.join(map(str, bits))

         elif value.__class__ == tuple:
             if self._map is None:

@@ -2146,7 +2121,7 @@ class BitString(_IntegerBitString, Constructable, Castable, Primitive, ValueMap)
                 if bit:
                     name = self._map.get(index, index)
                     self._native.add(name)
-            value = ''.join(map(str_cls, value))
+            value = ''.join(map(str, value))

         else:
             raise TypeError(unwrap(

@@ -2220,7 +2195,7 @@ class BitString(_IntegerBitString, Constructable, Castable, Primitive, ValueMap)
             A boolean if the bit is set
         """

-        is_int = isinstance(key, int_types)
+        is_int = isinstance(key, int)
         if not is_int:
             if not isinstance(self._map, dict):
                 raise ValueError(unwrap(

@@ -2266,7 +2241,7 @@ class BitString(_IntegerBitString, Constructable, Castable, Primitive, ValueMap)
             ValueError - when _map is not set or the key name is invalid
         """

-        is_int = isinstance(key, int_types)
+        is_int = isinstance(key, int)
         if not is_int:
             if self._map is None:
                 raise ValueError(unwrap(

@@ -2365,7 +2340,7 @@ class OctetBitString(Constructable, Castable, Primitive):
             ValueError - when an invalid value is passed
         """

-        if not isinstance(value, byte_cls):
+        if not isinstance(value, bytes):
             raise TypeError(unwrap(
                 '''
                 %s value must be a byte string, not %s

@@ -2435,7 +2410,7 @@ class OctetBitString(Constructable, Castable, Primitive):
             List with one tuple, consisting of a byte string and an integer (unused bits)
         """

-        unused_bits_len = ord(self.contents[0]) if _PY2 else self.contents[0]
+        unused_bits_len = self.contents[0]
         if not unused_bits_len:
             return [(self.contents[1:], ())]

@@ -2448,11 +2423,11 @@ class OctetBitString(Constructable, Castable, Primitive):
             raise ValueError('Bit string has {0} unused bits'.format(unused_bits_len))

         mask = (1 << unused_bits_len) - 1
-        last_byte = ord(self.contents[-1]) if _PY2 else self.contents[-1]
+        last_byte = self.contents[-1]

         # zero out the unused bits in the last byte.
         zeroed_byte = last_byte & ~mask
-        value = self.contents[1:-1] + (chr(zeroed_byte) if _PY2 else bytes((zeroed_byte,)))
+        value = self.contents[1:-1] + bytes((zeroed_byte,))

         unused_bits = _int_to_bit_tuple(last_byte & mask, unused_bits_len)

@@ -2505,7 +2480,7 @@ class IntegerBitString(_IntegerBitString, Constructable, Castable, Primitive):
             ValueError - when an invalid value is passed
         """

-        if not isinstance(value, int_types):
+        if not isinstance(value, int):
             raise TypeError(unwrap(
                 '''
                 %s value must be a positive integer, not %s

@@ -2570,7 +2545,7 @@ class OctetString(Constructable, Castable, Primitive):
             A byte string
         """

-        if not isinstance(value, byte_cls):
+        if not isinstance(value, bytes):
             raise TypeError(unwrap(
                 '''
                 %s value must be a byte string, not %s

@@ -2654,7 +2629,7 @@ class IntegerOctetString(Constructable, Castable, Primitive):
             ValueError - when an invalid value is passed
         """

-        if not isinstance(value, int_types):
+        if not isinstance(value, int):
             raise TypeError(unwrap(
                 '''
                 %s value must be a positive integer, not %s

@@ -2752,7 +2727,7 @@ class ParsableOctetString(Constructable, Castable, Primitive):
             A byte string
         """

-        if not isinstance(value, byte_cls):
+        if not isinstance(value, bytes):
             raise TypeError(unwrap(
                 '''
                 %s value must be a byte string, not %s

@@ -2904,7 +2879,7 @@ class ParsableOctetBitString(ParsableOctetString):
             ValueError - when an invalid value is passed
         """

-        if not isinstance(value, byte_cls):
+        if not isinstance(value, bytes):
             raise TypeError(unwrap(
                 '''
                 %s value must be a byte string, not %s

@@ -2934,7 +2909,7 @@ class ParsableOctetBitString(ParsableOctetString):
             A byte string
         """

-        unused_bits_len = ord(self.contents[0]) if _PY2 else self.contents[0]
+        unused_bits_len = self.contents[0]
         if unused_bits_len:
             raise ValueError('ParsableOctetBitString should have no unused bits')

@@ -3007,7 +2982,7 @@ class ObjectIdentifier(Primitive, ValueMap):
                 type_name(cls)
             ))

-        if not isinstance(value, str_cls):
+        if not isinstance(value, str):
             raise TypeError(unwrap(
                 '''
                 value must be a unicode string, not %s

@@ -3045,7 +3020,7 @@ class ObjectIdentifier(Primitive, ValueMap):
                 type_name(cls)
             ))

-        if not isinstance(value, str_cls):
+        if not isinstance(value, str):
             raise TypeError(unwrap(
                 '''
                 value must be a unicode string, not %s

@@ -3079,7 +3054,7 @@ class ObjectIdentifier(Primitive, ValueMap):
             ValueError - when an invalid value is passed
         """

-        if not isinstance(value, str_cls):
+        if not isinstance(value, str):
             raise TypeError(unwrap(
                 '''
                 %s value must be a unicode string, not %s

@@ -3153,24 +3128,22 @@ class ObjectIdentifier(Primitive, ValueMap):
             part = 0

             for byte in self.contents:
-                if _PY2:
-                    byte = ord(byte)
                 part = part * 128
                 part += byte & 127
                 # Last byte in subidentifier has the eighth bit set to 0
                 if byte & 0x80 == 0:
                     if len(output) == 0:
                         if part >= 80:
-                            output.append(str_cls(2))
-                            output.append(str_cls(part - 80))
+                            output.append(str(2))
+                            output.append(str(part - 80))
                         elif part >= 40:
-                            output.append(str_cls(1))
-                            output.append(str_cls(part - 40))
+                            output.append(str(1))
+                            output.append(str(part - 40))
                         else:
-                            output.append(str_cls(0))
-                            output.append(str_cls(part))
+                            output.append(str(0))
+                            output.append(str(part))
                     else:
-                        output.append(str_cls(part))
+                        output.append(str(part))
                     part = 0

             self._dotted = '.'.join(output)

@@ -3240,7 +3213,7 @@ class Enumerated(Integer):
             ValueError - when an invalid value is passed
         """

-        if not isinstance(value, int_types) and not isinstance(value, str_cls):
+        if not isinstance(value, int) and not isinstance(value, str):
             raise TypeError(unwrap(
                 '''
                 %s value must be an integer or a unicode string, not %s

@@ -3249,7 +3222,7 @@ class Enumerated(Integer):
                 type_name(value)
             ))

-        if isinstance(value, str_cls):
+        if isinstance(value, str):
             if value not in self._reverse_map:
                 raise ValueError(unwrap(
                     '''

@@ -3507,7 +3480,7 @@ class Sequence(Asn1Value):
         if self.children is None:
             self._parse_children()

-        if not isinstance(key, int_types):
+        if not isinstance(key, int):
             if key not in self._field_map:
                 raise KeyError(unwrap(
                     '''

@@ -3554,7 +3527,7 @@ class Sequence(Asn1Value):
         if self.children is None:
             self._parse_children()

-        if not isinstance(key, int_types):
+        if not isinstance(key, int):
             if key not in self._field_map:
                 raise KeyError(unwrap(
                     '''

@@ -3605,7 +3578,7 @@ class Sequence(Asn1Value):
         if self.children is None:
             self._parse_children()

-        if not isinstance(key, int_types):
+        if not isinstance(key, int):
             if key not in self._field_map:
                 raise KeyError(unwrap(
                     '''

@@ -4003,7 +3976,7 @@ class Sequence(Asn1Value):
             encoded using
         """

-        if not isinstance(field_name, str_cls):
+        if not isinstance(field_name, str):
             raise TypeError(unwrap(
                 '''
                 field_name must be a unicode string, not %s

@@ -4051,7 +4024,7 @@ class Sequence(Asn1Value):
                     try:
                         name = self._fields[index][0]
                     except (IndexError):
-                        name = str_cls(index)
+                        name = str(index)
                     self._native[name] = child.native
             except (ValueError, TypeError) as e:
                 self._native = None

@@ -4879,7 +4852,7 @@ class AbstractTime(AbstractString):
             A dict with the parsed values
         """

-        string = str_cls(self)
+        string = str(self)

         m = self._TIMESTRING_RE.match(string)
         if not m:

@@ -5018,8 +4991,6 @@ class UTCTime(AbstractTime):
                 raise ValueError('Year of the UTCTime is not in range [1950, 2049], use GeneralizedTime instead')

             value = value.strftime('%y%m%d%H%M%SZ')
-            if _PY2:
-                value = value.decode('ascii')

         AbstractString.set(self, value)
         # Set it to None and let the class take care of converting the next

@@ -5117,8 +5088,6 @@ class GeneralizedTime(AbstractTime):
                 fraction = ''

             value = value.strftime('%Y%m%d%H%M%S') + fraction + 'Z'
-            if _PY2:
-                value = value.decode('ascii')

         AbstractString.set(self, value)
         # Set it to None and let the class take care of converting the next

@@ -5340,7 +5309,7 @@ def _build_id_tuple(params, spec):
     else:
         required_class = 2
         required_tag = params['implicit']

-    if required_class is not None and not isinstance(required_class, int_types):
+    if required_class is not None and not isinstance(required_class, int):
        required_class = CLASS_NAME_TO_NUM_MAP[required_class]

     required_class = params.get('class_', required_class)
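The `ObjectIdentifier` loop above decodes OID contents as base-128 subidentifiers, with the first subidentifier packing the first two arcs as `X * 40 + Y`. A standalone sketch of the same decoding (hypothetical helper name, stdlib only):

```python
def decode_oid(contents: bytes) -> str:
    # Each subidentifier is base-128; the high bit is set on every
    # byte except the last one of the subidentifier
    output = []
    part = 0
    for byte in contents:
        part = part * 128 + (byte & 127)
        if byte & 0x80 == 0:
            if not output:
                # The first subidentifier encodes the first two arcs together
                if part >= 80:
                    output += ['2', str(part - 80)]
                elif part >= 40:
                    output += ['1', str(part - 40)]
                else:
                    output += ['0', str(part)]
            else:
                output.append(str(part))
            part = 0
    return '.'.join(output)

# 1.2.840.113549 (the RSA arc) encodes as 2a 86 48 86 f7 0d
assert decode_oid(bytes([0x2A, 0x86, 0x48, 0x86, 0xF7, 0x0D])) == '1.2.840.113549'
```

Dropping the `ord()` call works because indexing `bytes` on Python 3 yields an `int` directly, which is exactly the simplification the diff applies.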

@@ -20,7 +20,7 @@ import hashlib
 import math

 from ._errors import unwrap, APIException
-from ._types import type_name, byte_cls
+from ._types import type_name
 from .algos import _ForceNullParameters, DigestAlgorithm, EncryptionAlgorithm, RSAESOAEPParams, RSASSAPSSParams
 from .core import (
     Any,

@@ -582,7 +582,7 @@ class ECPrivateKey(Sequence):
         if self._key_size is None:
             # Infer the key_size from the existing private key if possible
             pkey_contents = self['private_key'].contents
-            if isinstance(pkey_contents, byte_cls) and len(pkey_contents) > 1:
+            if isinstance(pkey_contents, bytes) and len(pkey_contents) > 1:
                 self.set_key_size(len(self['private_key'].contents))

         elif self._key_size is not None:

@@ -744,7 +744,7 @@ class PrivateKeyInfo(Sequence):
             A PrivateKeyInfo object
         """

-        if not isinstance(private_key, byte_cls) and not isinstance(private_key, Asn1Value):
+        if not isinstance(private_key, bytes) and not isinstance(private_key, Asn1Value):
             raise TypeError(unwrap(
                 '''
                 private_key must be a byte string or Asn1Value, not %s

@@ -1112,7 +1112,7 @@ class PublicKeyInfo(Sequence):
             A PublicKeyInfo object
         """

-        if not isinstance(public_key, byte_cls) and not isinstance(public_key, Asn1Value):
+        if not isinstance(public_key, bytes) and not isinstance(public_key, Asn1Value):
             raise TypeError(unwrap(
                 '''
                 public_key must be a byte string or Asn1Value, not %s

@@ -1268,7 +1268,7 @@ class PublicKeyInfo(Sequence):
         """

         if self._sha1 is None:
-            self._sha1 = hashlib.sha1(byte_cls(self['public_key'])).digest()
+            self._sha1 = hashlib.sha1(bytes(self['public_key'])).digest()
         return self._sha1

     @property

@@ -1279,7 +1279,7 @@ class PublicKeyInfo(Sequence):
         """

         if self._sha256 is None:
-            self._sha256 = hashlib.sha256(byte_cls(self['public_key'])).digest()
+            self._sha256 = hashlib.sha256(bytes(self['public_key'])).digest()
         return self._sha256

     @property
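The `sha1`/`sha256` properties above are content-addressed fingerprints: hash the DER bytes of the public key once, cache the digest. The same pattern with plain `hashlib` (the sample bytes are illustrative, not a real key):

```python
import hashlib

def fingerprint(der_bytes: bytes) -> str:
    # Same idea as PublicKeyInfo.sha256: digest the raw DER encoding
    return hashlib.sha256(der_bytes).hexdigest()

key_der = b'\x30\x0d\x06\x09\x2a\x86\x48\x86\xf7\x0d\x01\x01\x01\x05\x00'
fp = fingerprint(key_der)
assert len(fp) == 64  # SHA-256 yields 32 bytes, i.e. 64 hex characters
```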

@@ -15,10 +15,9 @@ from __future__ import unicode_literals, division, absolute_import, print_function

 import sys

-from ._types import byte_cls, chr_cls, type_name
+from ._types import chr_cls, type_name
 from .util import int_from_bytes, int_to_bytes

-_PY2 = sys.version_info <= (3,)
 _INSUFFICIENT_DATA_MESSAGE = 'Insufficient data - %s bytes requested but only %s available'
 _MAX_DEPTH = 10

@@ -66,7 +65,7 @@ def emit(class_, method, tag, contents):
     if tag < 0:
         raise ValueError('tag must be greater than zero, not %s' % tag)

-    if not isinstance(contents, byte_cls):
+    if not isinstance(contents, bytes):
         raise TypeError('contents must be a byte string, not %s' % type_name(contents))

     return _dump_header(class_, method, tag, contents) + contents

@@ -101,7 +100,7 @@ def parse(contents, strict=False):
         - 5: byte string trailer
     """

-    if not isinstance(contents, byte_cls):
+    if not isinstance(contents, bytes):
         raise TypeError('contents must be a byte string, not %s' % type_name(contents))

     contents_len = len(contents)

@@ -130,7 +129,7 @@ def peek(contents):
         An integer with the number of bytes occupied by the ASN.1 value
     """

-    if not isinstance(contents, byte_cls):
+    if not isinstance(contents, bytes):
         raise TypeError('contents must be a byte string, not %s' % type_name(contents))

     info, consumed = _parse(contents, len(contents))

@@ -171,7 +170,7 @@ def _parse(encoded_data, data_len, pointer=0, lengths_only=False, depth=0):
     if data_len < pointer + 1:
         raise ValueError(_INSUFFICIENT_DATA_MESSAGE % (1, data_len - pointer))

-    first_octet = ord(encoded_data[pointer]) if _PY2 else encoded_data[pointer]
+    first_octet = encoded_data[pointer]

     pointer += 1

@@ -183,7 +182,7 @@ def _parse(encoded_data, data_len, pointer=0, lengths_only=False, depth=0):
         while True:
             if data_len < pointer + 1:
                 raise ValueError(_INSUFFICIENT_DATA_MESSAGE % (1, data_len - pointer))
-            num = ord(encoded_data[pointer]) if _PY2 else encoded_data[pointer]
+            num = encoded_data[pointer]
             pointer += 1
             if num == 0x80 and tag == 0:
                 raise ValueError('Non-minimal tag encoding')

@@ -196,7 +195,7 @@ def _parse(encoded_data, data_len, pointer=0, lengths_only=False, depth=0):
     if data_len < pointer + 1:
         raise ValueError(_INSUFFICIENT_DATA_MESSAGE % (1, data_len - pointer))

-    length_octet = ord(encoded_data[pointer]) if _PY2 else encoded_data[pointer]
+    length_octet = encoded_data[pointer]
     pointer += 1
     trailer = b''
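The `_parse` changes above all exploit the Python 3 behavior that indexing `bytes` yields an `int` directly, so the `ord()` calls can go. A minimal sketch of the header parsing the function performs, covering only the short and long definite-length forms:

```python
def parse_der_header(data: bytes):
    # First octet: class (bits 7-6), method (bit 5), tag number (bits 4-0)
    first = data[0]          # Python 3: indexing bytes yields an int
    class_ = first >> 6
    method = (first >> 5) & 1
    tag = first & 31         # assumes a low tag number (< 31)

    length_octet = data[1]
    if length_octet < 128:
        # Short form: the octet is the length itself
        length, header_len = length_octet, 2
    else:
        # Long form: low 7 bits give the count of length octets that follow
        num_octets = length_octet & 127
        length = int.from_bytes(data[2:2 + num_octets], 'big')
        header_len = 2 + num_octets
    return class_, method, tag, length, header_len

# 0x30 = universal, constructed, tag 16 (SEQUENCE); short-form length 3
assert parse_der_header(b'\x30\x03\x02\x01\x05') == (0, 1, 16, 3, 2)
```

This is a simplified illustration of the same bit layout; the vendored `_parse` additionally handles high tag numbers, indefinite lengths, and nesting depth.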

@@ -11,17 +11,13 @@ Encoding DER to PEM and decoding PEM to DER. Exports the following items:

 from __future__ import unicode_literals, division, absolute_import, print_function

+from io import BytesIO
 import base64
 import re
-import sys

 from ._errors import unwrap
-from ._types import type_name as _type_name, str_cls, byte_cls
-
-if sys.version_info < (3,):
-    from cStringIO import StringIO as BytesIO
-else:
-    from io import BytesIO
+from ._types import type_name as _type_name


 def detect(byte_string):

@@ -36,7 +32,7 @@ def detect(byte_string):
         string
     """

-    if not isinstance(byte_string, byte_cls):
+    if not isinstance(byte_string, bytes):
         raise TypeError(unwrap(
             '''
             byte_string must be a byte string, not %s

@@ -67,14 +63,14 @@ def armor(type_name, der_bytes, headers=None):
         A byte string of the PEM block
     """

-    if not isinstance(der_bytes, byte_cls):
+    if not isinstance(der_bytes, bytes):
         raise TypeError(unwrap(
             '''
             der_bytes must be a byte string, not %s
             ''' % _type_name(der_bytes)
         ))

-    if not isinstance(type_name, str_cls):
+    if not isinstance(type_name, str):
         raise TypeError(unwrap(
             '''
             type_name must be a unicode string, not %s

@@ -127,7 +123,7 @@ def _unarmor(pem_bytes):
         in the form "Name: Value" that are right after the begin line.
     """

-    if not isinstance(pem_bytes, byte_cls):
+    if not isinstance(pem_bytes, bytes):
         raise TypeError(unwrap(
             '''
             pem_bytes must be a byte string, not %s
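The `armor()` function whose type checks are updated above wraps base64-encoded DER in BEGIN/END framing lines. A minimal stdlib sketch of that framing (not the vendored implementation, which also supports headers):

```python
import base64
import textwrap

def armor(type_name: str, der_bytes: bytes) -> bytes:
    # Base64-encode the DER and wrap the body at 64 characters, PEM-style
    b64 = base64.b64encode(der_bytes).decode('ascii')
    body = '\n'.join(textwrap.wrap(b64, 64))
    return ('-----BEGIN %s-----\n%s\n-----END %s-----\n'
            % (type_name, body, type_name)).encode('ascii')

pem = armor('CERTIFICATE', b'\x30\x03\x02\x01\x05')
assert pem.startswith(b'-----BEGIN CERTIFICATE-----')
assert pem.endswith(b'-----END CERTIFICATE-----\n')
```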

@ -20,11 +20,11 @@ from __future__ import unicode_literals, division, absolute_import, print_functi
import math import math
import sys import sys
from datetime import datetime, date, timedelta, tzinfo from collections import OrderedDict
from datetime import datetime, date, timedelta, timezone, tzinfo
from ._errors import unwrap from ._errors import unwrap
from ._iri import iri_to_uri, uri_to_iri # noqa from ._iri import iri_to_uri, uri_to_iri # noqa
from ._ordereddict import OrderedDict # noqa
from ._types import type_name from ._types import type_name
if sys.platform == 'win32': if sys.platform == 'win32':
@ -33,185 +33,8 @@ else:
from socket import inet_ntop, inet_pton # noqa from socket import inet_ntop, inet_pton # noqa
# Python 2
if sys.version_info <= (3,):
def int_to_bytes(value, signed=False, width=None): def int_to_bytes(value, signed=False, width=None):
"""
Converts an integer to a byte string
:param value:
The integer to convert
:param signed:
If the byte string should be encoded using two's complement
:param width:
If None, the minimal possible size (but at least 1),
otherwise an integer of the byte width for the return value
:return:
A byte string
"""
if value == 0 and width == 0:
return b''
# Handle negatives in two's complement
is_neg = False
if signed and value < 0:
is_neg = True
bits = int(math.ceil(len('%x' % abs(value)) / 2.0) * 8)
value = (value + (1 << bits)) % (1 << bits)
hex_str = '%x' % value
if len(hex_str) & 1:
hex_str = '0' + hex_str
output = hex_str.decode('hex')
if signed and not is_neg and ord(output[0:1]) & 0x80:
output = b'\x00' + output
if width is not None:
if len(output) > width:
raise OverflowError('int too big to convert')
if is_neg:
pad_char = b'\xFF'
else:
pad_char = b'\x00'
output = (pad_char * (width - len(output))) + output
elif is_neg and ord(output[0:1]) & 0x80 == 0:
output = b'\xFF' + output
return output
def int_from_bytes(value, signed=False):
"""
Converts a byte string to an integer
:param value:
The byte string to convert
:param signed:
If the byte string should be interpreted using two's complement
:return:
An integer
"""
if value == b'':
return 0
num = long(value.encode("hex"), 16) # noqa
if not signed:
return num
# Check for sign bit and handle two's complement
if ord(value[0:1]) & 0x80:
bit_len = len(value) * 8
return num - (1 << bit_len)
return num
class timezone(tzinfo): # noqa
"""
Implements datetime.timezone for py2.
Only full minute offsets are supported.
DST is not supported.
"""
def __init__(self, offset, name=None):
"""
:param offset:
A timedelta with this timezone's offset from UTC
:param name:
Name of the timezone; if None, generate one.
"""
if not timedelta(hours=-24) < offset < timedelta(hours=24):
raise ValueError('Offset must be in [-23:59, 23:59]')
if offset.seconds % 60 or offset.microseconds:
raise ValueError('Offset must be full minutes')
self._offset = offset
if name is not None:
self._name = name
elif not offset:
self._name = 'UTC'
else:
self._name = 'UTC' + _format_offset(offset)
def __eq__(self, other):
"""
Compare two timezones
:param other:
The other timezone to compare to
:return:
A boolean
"""
if type(other) != timezone:
return False
return self._offset == other._offset
def __getinitargs__(self):
"""
Called by tzinfo.__reduce__ to support pickle and copy.
:return:
offset and name, to be used for __init__
"""
return self._offset, self._name
def tzname(self, dt):
"""
:param dt:
A datetime object; ignored.
:return:
Name of this timezone
"""
return self._name
def utcoffset(self, dt):
"""
:param dt:
A datetime object; ignored.
:return:
A timedelta object with the offset from UTC
"""
return self._offset
def dst(self, dt):
"""
:param dt:
A datetime object; ignored.
:return:
Zero timedelta
"""
return timedelta(0)
timezone.utc = timezone(timedelta(0))
# Python 3
else:
from datetime import timezone # noqa
def int_to_bytes(value, signed=False, width=None):
""" """
Converts an integer to a byte string Converts an integer to a byte string
@ -242,7 +65,7 @@ else:
width = math.ceil(bits_required / 8) or 1 width = math.ceil(bits_required / 8) or 1
return value.to_bytes(width, byteorder='big', signed=signed) return value.to_bytes(width, byteorder='big', signed=signed)
def int_from_bytes(value, signed=False): def int_from_bytes(value, signed=False):
""" """
Converts a byte string to an integer Converts a byte string to an integer
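The Python 2 shims removed above are replaced by the built-in `int.to_bytes` and `int.from_bytes` methods; a quick sketch of the two's-complement behavior they cover:

```python
# int.to_bytes / int.from_bytes handle the signed (two's complement)
# cases the removed Python 2 helpers emulated by hand
assert (255).to_bytes(2, byteorder='big') == b'\x00\xff'
assert (-1).to_bytes(2, byteorder='big', signed=True) == b'\xff\xff'
assert int.from_bytes(b'\xff\xff', byteorder='big', signed=True) == -1
assert int.from_bytes(b'\x00\xff', byteorder='big') == 255
```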
@@ -15,6 +15,7 @@ Other type classes are defined that help compose the types listed above.
from __future__ import unicode_literals, division, absolute_import, print_function
+from collections import OrderedDict
from contextlib import contextmanager
from encodings import idna # noqa
import hashlib
@@ -26,8 +27,7 @@ import unicodedata
from ._errors import unwrap
from ._iri import iri_to_uri, uri_to_iri
-from ._ordereddict import OrderedDict
-from ._types import type_name, str_cls, bytes_to_list
+from ._types import type_name
from .algos import AlgorithmIdentifier, AnyAlgorithmIdentifier, DigestAlgorithm, SignedDigestAlgorithm
from .core import (
Any,
@@ -100,7 +100,7 @@ class DNSName(IA5String):
A unicode string
"""
-if not isinstance(value, str_cls):
+if not isinstance(value, str):
raise TypeError(unwrap(
'''
%s value must be a unicode string, not %s
@@ -131,7 +131,7 @@ class URI(IA5String):
A unicode string
"""
-if not isinstance(value, str_cls):
+if not isinstance(value, str):
raise TypeError(unwrap(
'''
%s value must be a unicode string, not %s
@@ -215,7 +215,7 @@ class EmailAddress(IA5String):
A unicode string
"""
-if not isinstance(value, str_cls):
+if not isinstance(value, str):
raise TypeError(unwrap(
'''
%s value must be a unicode string, not %s
@@ -323,7 +323,7 @@ class IPAddress(OctetString):
an IPv6 address or IPv6 address with CIDR
"""
-if not isinstance(value, str_cls):
+if not isinstance(value, str):
raise TypeError(unwrap(
'''
%s value must be a unicode string, not %s
@@ -413,7 +413,7 @@ class IPAddress(OctetString):
if cidr_int is not None:
cidr_bits = '{0:b}'.format(cidr_int)
cidr = len(cidr_bits.rstrip('0'))
-value = value + '/' + str_cls(cidr)
+value = value + '/' + str(cidr)
self._native = value
return self._native
@@ -2598,7 +2598,7 @@ class Certificate(Sequence):
"""
if self._issuer_serial is None:
-self._issuer_serial = self.issuer.sha256 + b':' + str_cls(self.serial_number).encode('ascii')
+self._issuer_serial = self.issuer.sha256 + b':' + str(self.serial_number).encode('ascii')
return self._issuer_serial
@property
@@ -2647,7 +2647,7 @@ class Certificate(Sequence):
# We untag the element since it is tagged via being a choice from GeneralName
issuer = issuer.untag()
authority_serial = self.authority_key_identifier_value['authority_cert_serial_number'].native
-self._authority_issuer_serial = issuer.sha256 + b':' + str_cls(authority_serial).encode('ascii')
+self._authority_issuer_serial = issuer.sha256 + b':' + str(authority_serial).encode('ascii')
else:
self._authority_issuer_serial = None
return self._authority_issuer_serial
@@ -2860,7 +2860,7 @@ class Certificate(Sequence):
with a space between each pair of characters, all uppercase
"""
-return ' '.join('%02X' % c for c in bytes_to_list(self.sha1))
+return ' '.join('%02X' % c for c in list(self.sha1))
@property
def sha256(self):
@@ -2882,7 +2882,7 @@ class Certificate(Sequence):
with a space between each pair of characters, all uppercase
"""
-return ' '.join('%02X' % c for c in bytes_to_list(self.sha256))
+return ' '.join('%02X' % c for c in list(self.sha256))
def is_valid_domain_ip(self, domain_ip):
"""
@@ -2896,7 +2896,7 @@ class Certificate(Sequence):
A boolean - if the domain or IP is valid for the certificate
"""
-if not isinstance(domain_ip, str_cls):
+if not isinstance(domain_ip, str):
raise TypeError(unwrap(
'''
domain_ip must be a unicode string, not %s
jc/parsers/debconf_show.py (new file)
@@ -0,0 +1,149 @@
"""jc - JSON Convert `debconf-show` command output parser
Usage (cli):
$ debconf-show onlyoffice-documentserver | jc --debconf-show
or
$ jc debconf-show onlyoffice-documentserver
Usage (module):
import jc
result = jc.parse('debconf_show', debconf_show_command_output)
Schema:
[
{
"asked": boolean,
"packagename": string,
"name": string,
"value": string
}
]
Examples:
$ debconf-show onlyoffice-documentserver | jc --debconf-show -p
[
{
"asked": true,
"packagename": "onlyoffice",
"name": "jwt_secret",
"value": "aL8ei2iereuzee7cuJ6Cahjah1ixee2ah"
},
{
"asked": false,
"packagename": "onlyoffice",
"name": "db_pwd",
"value": "(password omitted)"
},
{
"asked": true,
"packagename": "onlyoffice",
"name": "rabbitmq_pwd",
"value": "(password omitted)"
},
{
"asked": true,
"packagename": "onlyoffice",
"name": "db_port",
"value": "5432"
},
{
"asked": true,
"packagename": "onlyoffice",
"name": "db_user",
"value": "onlyoffice"
},
{
"asked": true,
"packagename": "onlyoffice",
"name": "rabbitmq_proto",
"value": "amqp"
},
{
"asked": true,
"packagename": "onlyoffice",
"name": "cluster_mode",
"value": "false"
}
]
"""
from typing import List, Dict
from jc.jc_types import JSONDictType
import jc.utils
class info():
"""Provides parser metadata (version, author, etc.)"""
version = '1.0'
description = '`debconf-show` command parser'
author = 'Kelly Brazil'
author_email = 'kellyjonbrazil@gmail.com'
compatible = ['linux']
tags = ['command']
magic_commands = ['debconf-show']
__version__ = info.version
def _process(proc_data: List[JSONDictType]) -> List[JSONDictType]:
"""
Final processing to conform to the schema.
Parameters:
proc_data: (List of Dictionaries) raw structured data to process
Returns:
List of Dictionaries. Structured to conform to the schema.
"""
return proc_data
def parse(
data: str,
raw: bool = False,
quiet: bool = False
) -> List[JSONDictType]:
"""
Main text parsing function
Parameters:
data: (string) text data to parse
raw: (boolean) unprocessed output if True
quiet: (boolean) suppress warning messages if True
Returns:
List of Dictionaries. Raw or processed structured data.
"""
jc.utils.compatibility(__name__, info.compatible, quiet)
jc.utils.input_type_check(data)
raw_output: List = []
if jc.utils.has_data(data):
for line in filter(None, data.splitlines()):
output_line: Dict = {}
splitline = line.split(':', maxsplit=1)
output_line['asked'] = splitline[0].startswith('*')
packagename, key = splitline[0].split('/', maxsplit=1)
packagename = packagename[2:]
key = key.replace('-', '_')
val = splitline[1].strip()
output_line['packagename'] = packagename
output_line['name'] = key
output_line['value'] = val
raw_output.append(output_line)
return raw_output if raw else _process(raw_output)
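The per-line logic of the `parse()` loop above can be exercised standalone; a minimal sketch, using the sample line from the docstring:

```python
line = "* onlyoffice/jwt_secret: aL8ei2iereuzee7cuJ6Cahjah1ixee2ah"

# same steps as the parse() loop above
splitline = line.split(':', maxsplit=1)
asked = splitline[0].startswith('*')          # '*' marks an asked question
packagename, key = splitline[0].split('/', maxsplit=1)
packagename = packagename[2:]                 # drop the '* ' / '  ' prefix
key = key.replace('-', '_')
value = splitline[1].strip()

assert (asked, packagename, key, value) == (
    True, 'onlyoffice', 'jwt_secret', 'aL8ei2iereuzee7cuJ6Cahjah1ixee2ah')
```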
jc/parsers/iftop.py (new file)
@@ -0,0 +1,689 @@
"""jc - JSON Convert `iftop` command output parser
Usage (cli):
$ iftop -i <device> -t -B -s1 | jc --iftop
Usage (module):
import jc
result = jc.parse('iftop', iftop_command_output)
Schema:
[
{
"device": string,
"ip_address": string,
"mac_address": string,
"clients": [
{
"index": integer,
"connections": [
{
"host_name": string,
"host_port": string, # can be service or missing
"last_2s": integer,
"last_10s": integer,
"last_40s": integer,
"cumulative": integer,
"direction": string
}
]
}
]
"total_send_rate": {
"last_2s": integer,
"last_10s": integer,
"last_40s": integer
}
"total_receive_rate": {
"last_2s": integer,
"last_10s": integer,
"last_40s": integer
}
"total_send_and_receive_rate": {
"last_2s": integer,
"last_10s": integer,
"last_40s": integer
}
"peak_rate": {
"last_2s": integer,
"last_10s": integer,
"last_40s": integer
}
"cumulative_rate": {
"last_2s": integer,
"last_10s": integer,
"last_40s": integer
}
}
]
Examples:
$ iftop -i enp0s3 -t -P -s1 | jc --iftop -p
[
{
"device": "enp0s3",
"ip_address": "10.10.15.129",
"mac_address": "08:00:27:c0:4a:4f",
"clients": [
{
"index": 1,
"connections": [
{
"host_name": "ubuntu-2004-clean-01",
"host_port": "ssh",
"last_2s": 448,
"last_10s": 448,
"last_40s": 448,
"cumulative": 112,
"direction": "send"
},
{
"host_name": "10.10.15.72",
"host_port": "40876",
"last_2s": 208,
"last_10s": 208,
"last_40s": 208,
"cumulative": 52,
"direction": "receive"
}
]
}
],
"total_send_rate": {
"last_2s": 448,
"last_10s": 448,
"last_40s": 448
},
"total_receive_rate": {
"last_2s": 208,
"last_10s": 208,
"last_40s": 208
},
"total_send_and_receive_rate": {
"last_2s": 656,
"last_10s": 656,
"last_40s": 656
},
"peak_rate": {
"last_2s": 448,
"last_10s": 208,
"last_40s": 656
},
"cumulative_rate": {
"last_2s": 112,
"last_10s": 52,
"last_40s": 164
}
}
]
$ iftop -i enp0s3 -t -P -s1 | jc --iftop -p -r
[
{
"device": "enp0s3",
"ip_address": "10.10.15.129",
"mac_address": "11:22:33:44:55:66",
"clients": [
{
"index": 1,
"connections": [
{
"host_name": "ubuntu-2004-clean-01",
"host_port": "ssh",
"last_2s": "448b",
"last_10s": "448b",
"last_40s": "448b",
"cumulative": "112B",
"direction": "send"
},
{
"host_name": "10.10.15.72",
"host_port": "40876",
"last_2s": "208b",
"last_10s": "208b",
"last_40s": "208b",
"cumulative": "52B",
"direction": "receive"
}
]
}
],
"total_send_rate": {
"last_2s": "448b",
"last_10s": "448b",
"last_40s": "448b"
},
"total_receive_rate": {
"last_2s": "208b",
"last_10s": "208b",
"last_40s": "208b"
},
"total_send_and_receive_rate": {
"last_2s": "656b",
"last_10s": "656b",
"last_40s": "656b"
},
"peak_rate": {
"last_2s": "448b",
"last_10s": "208b",
"last_40s": "656b"
},
"cumulative_rate": {
"last_2s": "112B",
"last_10s": "52B",
"last_40s": "164B"
}
}
]
"""
import re
from typing import List, Dict
from jc.jc_types import JSONDictType
import jc.utils
from collections import namedtuple
from numbers import Number
class info:
"""Provides parser metadata (version, author, etc.)"""
version = "1.0"
description = "`iftop` command parser"
author = "Ron Green"
author_email = "11993626+georgettica@users.noreply.github.com"
compatible = ["linux"]
tags = ["command"]
__version__ = info.version
def _process(proc_data: List[JSONDictType], quiet: bool = False) -> List[JSONDictType]:
"""
Final processing to conform to the schema.
Parameters:
proc_data: (List of Dictionaries) raw structured data to process
Returns:
List of Dictionaries. Structured to conform to the schema.
"""
string_to_bytes_fields = ["last_2s", "last_10s", "last_40s", "cumulative"]
one_nesting = [
"total_send_rate",
"total_receive_rate",
"total_send_and_receive_rate",
"peak_rate",
"cumulative_rate",
]
if not proc_data:
return proc_data
for entry in proc_data:
# print(f"{entry=}")
for entry_key in entry:
# print(f"{entry_key=}")
if entry_key in one_nesting:
# print(f"{entry[entry_key]=}")
for one_nesting_item_key in entry[entry_key]:
# print(f"{one_nesting_item_key=}")
if one_nesting_item_key in string_to_bytes_fields:
entry[entry_key][one_nesting_item_key] = _parse_size(entry[entry_key][one_nesting_item_key])
elif entry_key == "clients":
for client in entry[entry_key]:
# print(f"{client=}")
if "connections" not in client:
continue
for connection in client["connections"]:
# print(f"{connection=}")
for connection_key in connection:
# print(f"{connection_key=}")
if connection_key in string_to_bytes_fields:
connection[connection_key] = _parse_size(connection[connection_key])
return proc_data
# _parse_size from https://github.com/xolox/python-humanfriendly
# Copyright (c) 2021 Peter Odding
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
# Note: this function can be replaced with jc.utils.convert_size_to_int
# in the future.
def _parse_size(size, binary=False):
"""
Parse a human readable data size and return the number of bytes.
:param size: The human readable file size to parse (a string).
:param binary: :data:`True` to use binary multiples of bytes (base-2) for
ambiguous unit symbols and names, :data:`False` to use
decimal multiples of bytes (base-10).
:returns: The corresponding size in bytes (an integer).
:raises: :exc:`InvalidSize` when the input can't be parsed.
This function knows how to parse sizes in bytes, kilobytes, megabytes,
gigabytes, terabytes and petabytes. Some examples:
>>> from humanfriendly import parse_size
>>> parse_size('42')
42
>>> parse_size('13b')
13
>>> parse_size('5 bytes')
5
>>> parse_size('1 KB')
1000
>>> parse_size('1 kilobyte')
1000
>>> parse_size('1 KiB')
1024
>>> parse_size('1 KB', binary=True)
1024
>>> parse_size('1.5 GB')
1500000000
>>> parse_size('1.5 GB', binary=True)
1610612736
"""
def tokenize(text):
tokenized_input = []
for token in re.split(r'(\d+(?:\.\d+)?)', text):
token = token.strip()
if re.match(r'\d+\.\d+', token):
tokenized_input.append(float(token))
elif token.isdigit():
tokenized_input.append(int(token))
elif token:
tokenized_input.append(token)
return tokenized_input
SizeUnit = namedtuple('SizeUnit', 'divider, symbol, name')
CombinedUnit = namedtuple('CombinedUnit', 'decimal, binary')
disk_size_units = (
CombinedUnit(SizeUnit(1000**1, 'KB', 'kilobyte'), SizeUnit(1024**1, 'KiB', 'kibibyte')),
CombinedUnit(SizeUnit(1000**2, 'MB', 'megabyte'), SizeUnit(1024**2, 'MiB', 'mebibyte')),
CombinedUnit(SizeUnit(1000**3, 'GB', 'gigabyte'), SizeUnit(1024**3, 'GiB', 'gibibyte')),
CombinedUnit(SizeUnit(1000**4, 'TB', 'terabyte'), SizeUnit(1024**4, 'TiB', 'tebibyte')),
CombinedUnit(SizeUnit(1000**5, 'PB', 'petabyte'), SizeUnit(1024**5, 'PiB', 'pebibyte')),
CombinedUnit(SizeUnit(1000**6, 'EB', 'exabyte'), SizeUnit(1024**6, 'EiB', 'exbibyte')),
CombinedUnit(SizeUnit(1000**7, 'ZB', 'zettabyte'), SizeUnit(1024**7, 'ZiB', 'zebibyte')),
CombinedUnit(SizeUnit(1000**8, 'YB', 'yottabyte'), SizeUnit(1024**8, 'YiB', 'yobibyte')),
)
tokens = tokenize(size)
if tokens and isinstance(tokens[0], Number):
# Get the normalized unit (if any) from the tokenized input.
normalized_unit = tokens[1].lower() if len(tokens) == 2 and isinstance(tokens[1], str) else ''
# If the input contains only a number, it's assumed to be the number of
# bytes. The second token can also explicitly reference the unit bytes.
if len(tokens) == 1 or normalized_unit.startswith('b'):
return int(tokens[0])
# Otherwise we expect two tokens: A number and a unit.
if normalized_unit:
# Convert plural units to singular units, for details:
# https://github.com/xolox/python-humanfriendly/issues/26
normalized_unit = normalized_unit.rstrip('s')
for unit in disk_size_units:
# First we check for unambiguous symbols (KiB, MiB, GiB, etc)
# and names (kibibyte, mebibyte, gibibyte, etc) because their
# handling is always the same.
if normalized_unit in (unit.binary.symbol.lower(), unit.binary.name.lower()):
return int(tokens[0] * unit.binary.divider)
# Now we will deal with ambiguous prefixes (K, M, G, etc),
# symbols (KB, MB, GB, etc) and names (kilobyte, megabyte,
# gigabyte, etc) according to the caller's preference.
if (normalized_unit in (unit.decimal.symbol.lower(), unit.decimal.name.lower()) or
normalized_unit.startswith(unit.decimal.symbol[0].lower())):
return int(tokens[0] * (unit.binary.divider if binary else unit.decimal.divider))
# We failed to parse the size specification.
return None
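The `tokenize` helper inside `_parse_size` splits a size string into number and unit tokens; the same logic as a standalone sketch:

```python
import re

def tokenize(text):
    # re.split with a capturing group keeps the numeric tokens in the result
    tokenized_input = []
    for token in re.split(r'(\d+(?:\.\d+)?)', text):
        token = token.strip()
        if re.match(r'\d+\.\d+', token):
            tokenized_input.append(float(token))
        elif token.isdigit():
            tokenized_input.append(int(token))
        elif token:
            tokenized_input.append(token)
    return tokenized_input

assert tokenize('448b') == [448, 'b']
assert tokenize('1.5 GB') == [1.5, 'GB']
assert tokenize('42') == [42]
```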
def parse(data: str, raw: bool = False, quiet: bool = False) -> List[JSONDictType]:
"""
Main text parsing function
Parameters:
data: (string) text data to parse
raw: (boolean) unprocessed output if True
quiet: (boolean) suppress warning messages if True
Returns:
List of Dictionaries. Raw or processed structured data.
"""
jc.utils.compatibility(__name__, info.compatible, quiet)
jc.utils.input_type_check(data)
raw_output: List[Dict] = []
interface_item: Dict = {}
current_client: Dict = {}
clients: List = []
is_previous_line_interface = False
saw_already_host_line = False
before_arrow = r"\s+(?P<index>\d+)\s+(?P<host_name>[^\s]+):(?P<host_port>[^\s]+)\s+"
before_arrow_no_port = r"\s+(?P<index>\d+)\s+(?P<host_name>[^\s]+)\s+"
after_arrow_before_newline = r"\s+(?P<send_last_2s>[^\s]+)\s+(?P<send_last_10s>[^\s]+)\s+(?P<send_last_40s>[^\s]+)\s+(?P<send_cumulative>[^\s]+)"
newline_before_arrow = r"\s+(?P<receive_ip>.+):(?P<receive_port>\w+)\s+"
newline_before_arrow_no_port = r"\s+(?P<receive_ip>.+)\s+"
after_arrow_till_end = r"\s+(?P<receive_last_2s>[^\s]+)\s+(?P<receive_last_10s>[^\s]+)\s+(?P<receive_last_40s>[^\s]+)\s+(?P<receive_cumulative>[^\s]+)"
re_linux_clients_before_newline = re.compile(
rf"{before_arrow}=>{after_arrow_before_newline}"
)
re_linux_clients_before_newline_no_port = re.compile(
rf"{before_arrow_no_port}=>{after_arrow_before_newline}"
)
re_linux_clients_after_newline_no_port = re.compile(
rf"{newline_before_arrow_no_port}<={after_arrow_till_end}"
)
re_linux_clients_after_newline = re.compile(
rf"{newline_before_arrow}<={after_arrow_till_end}"
)
re_total_send_rate = re.compile(
r"Total send rate:\s+(?P<total_send_rate_last_2s>[^\s]+)\s+(?P<total_send_rate_last_10s>[^\s]+)\s+(?P<total_send_rate_last_40s>[^\s]+)"
)
re_total_receive_rate = re.compile(
r"Total receive rate:\s+(?P<total_receive_rate_last_2s>[^\s]+)\s+(?P<total_receive_rate_last_10s>[^\s]+)\s+(?P<total_receive_rate_last_40s>[^\s]+)"
)
re_total_send_and_receive_rate = re.compile(
r"Total send and receive rate:\s+(?P<total_send_and_receive_rate_last_2s>[^\s]+)\s+(?P<total_send_and_receive_rate_last_10s>[^\s]+)\s+(?P<total_send_and_receive_rate_last_40s>[^\s]+)"
)
re_peak_rate = re.compile(
r"Peak rate \(sent/received/total\):\s+(?P<peak_rate_sent>[^\s]+)\s+(?P<peak_rate_received>[^\s]+)\s+(?P<peak_rate_total>[^\s]+)"
)
re_cumulative_rate = re.compile(
r"Cumulative \(sent/received/total\):\s+(?P<cumulative_rate_sent>[^\s]+)\s+(?P<cumulative_rate_received>[^\s]+)\s+(?P<cumulative_rate_total>[^\s]+)"
)
jc.utils.compatibility(__name__, info.compatible, quiet)
jc.utils.input_type_check(data)
if not jc.utils.has_data(data):
return raw_output if raw else _process(raw_output, quiet=quiet)
for line in filter(None, data.splitlines()):
if line.startswith("interface:"):
# Example:
# interface: enp0s3
interface_item["device"] = line.split(":")[1].strip()
elif line.startswith("IP address is:"):
# Example:
# IP address is: 10.10.15.129
interface_item["ip_address"] = line.split(":")[1].strip()
elif line.startswith("MAC address is:"):
# Example:
# MAC address is: 08:00:27:c0:4a:4f
# strip off the "MAC address is: " part
data_without_front_list = line.split(":")[1:]
# join the remaining parts back together
data_without_front = ":".join(data_without_front_list)
interface_item["mac_address"] = data_without_front.strip()
elif line.startswith("Listening on"):
# Example:
# Listening on enp0s3
pass
elif (
line.startswith("# Host name (port/service if enabled)")
and not saw_already_host_line
):
saw_already_host_line = True
# Example:
# # Host name (port/service if enabled) last 2s last 10s last 40s cumulative
pass
elif (
line.startswith("# Host name (port/service if enabled)")
and saw_already_host_line
):
old_interface_item, interface_item = interface_item, {}
interface_item.update(
{
"device": old_interface_item["device"],
"ip_address": old_interface_item["ip_address"],
"mac_address": old_interface_item["mac_address"],
}
)
elif "=>" in line and is_previous_line_interface and ":" in line:
# should not happen
pass
elif "=>" in line and not is_previous_line_interface and ":" in line:
# Example:
# 1 ubuntu-2004-clean-01:ssh => 448b 448b 448b 112B
is_previous_line_interface = True
match_raw = re_linux_clients_before_newline.match(line)
if not match_raw:
# this is a bug in iftop
continue
match_dict = match_raw.groupdict()
current_client = {}
current_client["index"] = int(match_dict["index"])
current_client["connections"] = []
current_client_send = {
"host_name": match_dict["host_name"],
"host_port": match_dict["host_port"],
"last_2s": match_dict["send_last_2s"],
"last_10s": match_dict["send_last_10s"],
"last_40s": match_dict["send_last_40s"],
"cumulative": match_dict["send_cumulative"],
"direction": "send",
}
current_client["connections"].append(current_client_send)
# not adding yet as the receive part is not yet parsed
elif "=>" in line and not is_previous_line_interface and ":" not in line:
# should not happen
pass
elif "=>" in line and is_previous_line_interface and ":" not in line:
is_previous_line_interface = True
match_raw = re_linux_clients_before_newline_no_port.match(line)
if not match_raw:
# this is a bug in iftop
continue
match_dict = match_raw.groupdict()
current_client = {}
current_client["index"] = int(match_dict["index"])
current_client["connections"] = []
current_client_send = {
"host_name": match_dict["host_name"],
"last_2s": match_dict["send_last_2s"],
"last_10s": match_dict["send_last_10s"],
"last_40s": match_dict["send_last_40s"],
"cumulative": match_dict["send_cumulative"],
"direction": "send",
}
current_client["connections"].append(current_client_send)
# not adding yet as the receive part is not yet parsed
elif "<=" in line and not is_previous_line_interface and ":" in line:
# should not happen
pass
elif "<=" in line and is_previous_line_interface and ":" in line:
# Example:
# 10.10.15.72:40876 <= 208b 208b 208b 52B
is_previous_line_interface = False
match_raw = re_linux_clients_after_newline.match(line)
if not match_raw:
# this is a bug in iftop
continue
match_dict = match_raw.groupdict()
current_client_receive = {
"host_name": match_dict["receive_ip"],
"host_port": match_dict["receive_port"],
"last_2s": match_dict["receive_last_2s"],
"last_10s": match_dict["receive_last_10s"],
"last_40s": match_dict["receive_last_40s"],
"cumulative": match_dict["receive_cumulative"],
"direction": "receive",
}
current_client["connections"].append(current_client_receive)
clients.append(current_client)
elif "<=" in line and not is_previous_line_interface and ":" not in line:
# should not happen
pass
elif "<=" in line and is_previous_line_interface and ":" not in line:
# Example:
# 10.10.15.72:40876 <= 208b 208b 208b 52B
is_previous_line_interface = False
match_raw = re_linux_clients_after_newline_no_port.match(line)
if not match_raw:
# this is a bug in iftop
continue
match_dict = match_raw.groupdict()
current_client_receive = {
"host_name": match_dict["receive_ip"],
"last_2s": match_dict["receive_last_2s"],
"last_10s": match_dict["receive_last_10s"],
"last_40s": match_dict["receive_last_40s"],
"cumulative": match_dict["receive_cumulative"],
"direction": "receive",
}
current_client["connections"].append(current_client_receive)
clients.append(current_client)
# check if all of the characters are dashes or equal signs
elif all(c == "-" for c in line):
pass
elif line.startswith("Total send rate"):
# Example:
# Total send rate: 448b 448b 448b
match_raw = re_total_send_rate.match(line)
if not match_raw:
# this is a bug in iftop
continue
match_dict = match_raw.groupdict()
interface_item["total_send_rate"] = {}
interface_item["total_send_rate"].update(
{
"last_2s": match_dict["total_send_rate_last_2s"],
"last_10s": match_dict["total_send_rate_last_10s"],
"last_40s": match_dict["total_send_rate_last_40s"],
}
)
elif line.startswith("Total receive rate"):
# Example:
# Total receive rate: 208b 208b 208b
match_raw = re_total_receive_rate.match(line)
if not match_raw:
# this is a bug in iftop
continue
match_dict = match_raw.groupdict()
interface_item["total_receive_rate"] = {}
interface_item["total_receive_rate"].update(
{
"last_2s": match_dict["total_receive_rate_last_2s"],
"last_10s": match_dict["total_receive_rate_last_10s"],
"last_40s": match_dict["total_receive_rate_last_40s"],
}
)
elif line.startswith("Total send and receive rate"):
# Example:
# Total send and receive rate: 656b 656b 656b
match_raw = re_total_send_and_receive_rate.match(line)
if not match_raw:
# this is a bug in iftop
continue
match_dict = match_raw.groupdict()
interface_item["total_send_and_receive_rate"] = {}
interface_item["total_send_and_receive_rate"].update(
{
"last_2s": match_dict["total_send_and_receive_rate_last_2s"],
"last_10s": match_dict["total_send_and_receive_rate_last_10s"],
"last_40s": match_dict["total_send_and_receive_rate_last_40s"],
}
)
elif line.startswith("Peak rate"):
match_raw = re_peak_rate.match(line)
if not match_raw:
# this is a bug in iftop
continue
match_dict = match_raw.groupdict()
interface_item["peak_rate"] = {}
interface_item["peak_rate"].update(
{
"last_2s": match_dict["peak_rate_sent"],
"last_10s": match_dict["peak_rate_received"],
"last_40s": match_dict["peak_rate_total"],
}
)
elif line.startswith("Cumulative"):
match_raw = re_cumulative_rate.match(line)
if not match_raw:
# this is a bug in iftop
continue
match_dict = match_raw.groupdict()
interface_item["cumulative_rate"] = {}
interface_item["cumulative_rate"].update(
{
"last_2s": match_dict["cumulative_rate_sent"],
"last_10s": match_dict["cumulative_rate_received"],
"last_40s": match_dict["cumulative_rate_total"],
}
)
elif all(c == "=" for c in line):
interface_item["clients"] = clients
clients = []
# append a copy here; otherwise every appended item stays linked to the same dict object
raw_output.append(interface_item.copy())
return raw_output if raw else _process(raw_output, quiet=quiet)
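The send-side pattern built in `parse()` above (`re_linux_clients_before_newline`) can be checked in isolation against the sample line from the docstring:

```python
import re

# same fragments as the parser assembles for the "=>" (send) lines
before_arrow = r"\s+(?P<index>\d+)\s+(?P<host_name>[^\s]+):(?P<host_port>[^\s]+)\s+"
after_arrow = (r"\s+(?P<send_last_2s>[^\s]+)\s+(?P<send_last_10s>[^\s]+)"
               r"\s+(?P<send_last_40s>[^\s]+)\s+(?P<send_cumulative>[^\s]+)")
pat = re.compile(rf"{before_arrow}=>{after_arrow}")

line = "   1 ubuntu-2004-clean-01:ssh    =>    448b    448b    448b    112B"
m = pat.match(line).groupdict()
assert m["host_name"] == "ubuntu-2004-clean-01"
assert m["host_port"] == "ssh"
assert m["send_last_2s"] == "448b"
assert m["send_cumulative"] == "112B"
```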
jc/parsers/iptables.py
@@ -25,7 +25,7 @@ Schema:
"num" integer, "num" integer,
"pkts": integer, "pkts": integer,
"bytes": integer, # converted based on suffix "bytes": integer, # converted based on suffix
"target": string, "target": string, # Null if blank
"prot": string, "prot": string,
"opt": string, # "--" = Null "opt": string, # "--" = Null
"in": string, "in": string,
@@ -163,7 +163,7 @@ import jc.utils
class info():
"""Provides parser metadata (version, author, etc.)"""
-version = '1.8'
+version = '1.9'
description = '`iptables` command parser'
author = 'Kelly Brazil'
author_email = 'kellyjonbrazil@gmail.com'
@@ -222,6 +222,10 @@ def _process(proc_data):
if rule['opt'] == '--':
rule['opt'] = None
+if 'target' in rule:
+if rule['target'] == '':
+rule['target'] = None
return proc_data
@@ -271,15 +275,18 @@
continue
else:
+# sometimes the "target" column is blank. Stuff in a dummy character
+if headers[0] == 'target' and line.startswith(' '):
+line = '\u2063' + line
rule = line.split(maxsplit=len(headers) - 1)
temp_rule = dict(zip(headers, rule))
if temp_rule:
+if temp_rule.get('target') == '\u2063':
+temp_rule['target'] = ''
chain['rules'].append(temp_rule)
if chain:
raw_output.append(chain)
-if raw:
-return raw_output
-else:
-return _process(raw_output)
+return raw_output if raw else _process(raw_output)
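The `\u2063` (invisible separator) sentinel used in the iptables fix works because `str.split()` discards leading whitespace, which would otherwise shift every field one column left when the `target` column is blank; a sketch with a hypothetical rule line:

```python
headers = ['target', 'prot', 'opt', 'in', 'out', 'source', 'destination']
line = '           tcp  --  any    any    anywhere  anywhere'  # blank target column

# naive split misaligns: 'tcp' lands under 'target'
naive = dict(zip(headers, line.split(maxsplit=len(headers) - 1)))
assert naive['target'] == 'tcp'

# the fix: prepend a non-whitespace sentinel so the first field stays put
if line.startswith(' '):
    line = '\u2063' + line
rule = dict(zip(headers, line.split(maxsplit=len(headers) - 1)))
if rule.get('target') == '\u2063':
    rule['target'] = ''
assert rule['target'] == '' and rule['prot'] == 'tcp'
```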
jc/parsers/iso_datetime.py (deleted file)
@@ -1,46 +0,0 @@
"""jc - JSON Convert ISO 8601 Datetime string parser
This parser has been renamed to datetime-iso (cli) or datetime_iso (module).
This parser will be removed in a future version, so please start using
the new parser name.
"""
from jc.parsers import datetime_iso
import jc.utils
class info():
"""Provides parser metadata (version, author, etc.)"""
version = '1.1'
description = 'Deprecated - please use datetime-iso'
author = 'Kelly Brazil'
author_email = 'kellyjonbrazil@gmail.com'
details = 'Deprecated - please use datetime-iso'
compatible = ['linux', 'aix', 'freebsd', 'darwin', 'win32', 'cygwin']
tags = ['standard', 'string']
deprecated = True
__version__ = info.version
def parse(data, raw=False, quiet=False):
"""
This parser is deprecated and calls datetime_iso. Please use datetime_iso
directly. This parser will be removed in the future.
Parameters:
data: (string) text data to parse
raw: (boolean) unprocessed output if True
quiet: (boolean) suppress warning messages if True
Returns:
Dictionary. Raw or processed structured data.
"""
jc.utils.warning_message([
'iso-datetime parser is deprecated. Please use datetime-iso instead.'
])
return datetime_iso.parse(data, raw=raw, quiet=quiet)
jc/parsers/mount.py
@@ -70,12 +70,14 @@ Example:
...
]
"""
+import re
import jc.utils
class info():
"""Provides parser metadata (version, author, etc.)"""
-version = '1.8'
+version = '1.9'
description = '`mount` command parser'
author = 'Kelly Brazil'
author_email = 'kellyjonbrazil@gmail.com'
@ -133,13 +135,25 @@ def _linux_parse(data):
for entry in data: for entry in data:
output_line = {} output_line = {}
parsed_line = entry.split()
output_line['filesystem'] = parsed_line[0] pattern = re.compile(
output_line['mount_point'] = parsed_line[2] r'''
output_line['type'] = parsed_line[4] (?P<filesystem>\S+)\s+
output_line['options'] = parsed_line[5].lstrip('(').rstrip(')').split(',') on\s+
(?P<mount_point>.*?)\s+
type\s+
(?P<type>\S+)\s+
\((?P<options>.*?)\)\s*''',
re.VERBOSE)
match = pattern.match(entry)
groups = match.groupdict()
if groups:
output_line['filesystem'] = groups["filesystem"]
output_line['mount_point'] = groups["mount_point"]
output_line['type'] = groups["type"]
output_line['options'] = groups["options"].split(',')
output.append(output_line) output.append(output_line)
return output return output
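The verbose regex above is what fixes mount points containing spaces. A minimal standalone sketch (using the same pattern as the diff, not the parser itself) shows the non-greedy `mount_point` group absorbing spaces until the literal `type` keyword:

```python
import re

# Same pattern as the parser: the non-greedy mount_point group
# matches up to the first " type " keyword, so embedded spaces survive.
pattern = re.compile(
    r'''
    (?P<filesystem>\S+)\s+
    on\s+
    (?P<mount_point>.*?)\s+
    type\s+
    (?P<type>\S+)\s+
    \((?P<options>.*?)\)\s*''',
    re.VERBOSE)

line = '/dev/sdb1 on /mnt/my backup disk type ext4 (rw,relatime)'
groups = pattern.match(line).groupdict()
print(groups['mount_point'])         # /mnt/my backup disk
print(groups['options'].split(','))  # ['rw', 'relatime']
```

With the old `entry.split()` approach, the mount point would have been truncated at the first space.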

View File

@ -28,13 +28,13 @@
# OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED # OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
# OF THE POSSIBILITY OF SUCH DAMAGE. # OF THE POSSIBILITY OF SUCH DAMAGE.
import sys
import string import string
if sys.version_info >= (3, 0):
def unichr(character): # pylint: disable=redefined-builtin def unichr(character): # pylint: disable=redefined-builtin
return chr(character) return chr(character)
def ConvertNEXTSTEPToUnicode(hex_digits): def ConvertNEXTSTEPToUnicode(hex_digits):
# taken from http://ftp.unicode.org/Public/MAPPINGS/VENDORS/NEXT/NEXTSTEP.TXT # taken from http://ftp.unicode.org/Public/MAPPINGS/VENDORS/NEXT/NEXTSTEP.TXT
conversion = { conversion = {

View File

@ -64,12 +64,10 @@ def GetFileEncoding(path):
def OpenFileWithEncoding(file_path, encoding): def OpenFileWithEncoding(file_path, encoding):
return codecs.open(file_path, 'r', encoding=encoding, errors='ignore') return codecs.open(file_path, 'r', encoding=encoding, errors='ignore')
if sys.version_info < (3, 0):
def OpenFile(file_path): def OpenFile(file_path):
return open(file_path, 'rb') return open(file_path, 'rb')
else:
def OpenFile(file_path):
return open(file_path, 'br')
class PBParser(object): class PBParser(object):

View File

@ -32,7 +32,7 @@ import sys
from functools import cmp_to_key from functools import cmp_to_key
# for python 3.10+ compatibility # for python 3.10+ compatibility
if sys.version_info.major == 3 and sys.version_info.minor >= 10: if sys.version_info >= (3, 10):
import collections import collections
setattr(collections, "MutableMapping", collections.abc.MutableMapping) setattr(collections, "MutableMapping", collections.abc.MutableMapping)

jc/parsers/pkg_index_apk.py Normal file
View File

@ -0,0 +1,220 @@
"""jc - JSON Convert Alpine Linux Package Index files
Usage (cli):
$ cat APKINDEX | jc --pkg-index-apk
Usage (module):
import jc
result = jc.parse('pkg_index_apk', pkg_index_apk_output)
Schema:
[
{
"checksum": string,
"package": string,
"version": string,
"architecture": string,
"package_size": integer,
"installed_size": integer,
"description": string,
"url": string,
"license": string,
"origin": string,
"maintainer": {
"name": string,
"email": string,
},
"build_time": integer,
"commit": string,
"provider_priority": string,
"dependencies": [
string
],
"provides": [
string
],
"install_if": [
string
],
}
]
Example:
$ cat APKINDEX | jc --pkg-index-apk
[
{
"checksum": "Q1znBl9k+RKgY6gl5Eg3iz73KZbLY=",
"package": "yasm",
"version": "1.3.0-r4",
"architecture": "x86_64",
"package_size": 772109,
"installed_size": 1753088,
"description": "A rewrite of NASM to allow for multiple synta...",
"url": "http://www.tortall.net/projects/yasm/",
"license": "BSD-2-Clause",
"origin": "yasm",
"maintainer": {
"name": "Natanael Copa",
"email": "ncopa@alpinelinux.org"
},
"build_time": 1681228881,
"commit": "84a227baf001b6e0208e3352b294e4d7a40e93de",
"dependencies": [
"so:libc.musl-x86_64.so.1"
],
"provides": [
"cmd:vsyasm=1.3.0-r4",
"cmd:yasm=1.3.0-r4",
"cmd:ytasm=1.3.0-r4"
]
}
]
$ cat APKINDEX | jc --pkg-index-apk --raw
[
{
"C": "Q1znBl9k+RKgY6gl5Eg3iz73KZbLY=",
"P": "yasm",
"V": "1.3.0-r4",
"A": "x86_64",
"S": "772109",
"I": "1753088",
"T": "A rewrite of NASM to allow for multiple syntax supported...",
"U": "http://www.tortall.net/projects/yasm/",
"L": "BSD-2-Clause",
"o": "yasm",
"m": "Natanael Copa <ncopa@alpinelinux.org>",
"t": "1681228881",
"c": "84a227baf001b6e0208e3352b294e4d7a40e93de",
"D": "so:libc.musl-x86_64.so.1",
"p": "cmd:vsyasm=1.3.0-r4 cmd:yasm=1.3.0-r4 cmd:ytasm=1.3.0-r4"
},
]
"""
import re
from typing import List, Dict, Union
import jc.utils
class info:
"""Provides parser metadata (version, author, etc.)"""
version = "1.0"
description = "Alpine Linux Package Index file parser"
author = "Roey Darwish Dror"
author_email = "roey.ghost@gmail.com"
compatible = ['linux', 'darwin', 'cygwin', 'win32', 'aix', 'freebsd']
tags = ['standard', 'file', 'string']
__version__ = info.version
_KEY = {
"C": "checksum",
"P": "package",
"V": "version",
"A": "architecture",
"S": "package_size",
"I": "installed_size",
"T": "description",
"U": "url",
"L": "license",
"o": "origin",
"m": "maintainer",
"t": "build_time",
"c": "commit",
"k": "provider_priority",
"D": "dependencies",
"p": "provides",
"i": "install_if"
}
def _value(key: str, value: str) -> Union[str, int, List[str], Dict[str, str]]:
"""
Convert value to the appropriate type
Parameters:
key: (string) key name
value: (string) value to convert
Returns:
Converted value
"""
if key in ['S', 'I', 't', 'k']:
return int(value)
if key in ['D', 'p', 'i']:
splitted = value.split(' ')
return splitted
if key == "m":
m = re.match(r'(.*) <(.*)>', value)
if m:
return {'name': m.group(1), 'email': m.group(2)}
else:
return {'name': value}
return value
def _process(proc_data: List[Dict]) -> List[Dict]:
"""
Final processing to conform to the schema.
Parameters:
proc_data: (List of Dictionaries) raw structured data to process
Returns:
List of Dictionaries. Structured to conform to the schema.
"""
return [{_KEY.get(k, k): _value(k, v) for k, v in d.items()} for d in proc_data]
def parse(data: str, raw: bool = False, quiet: bool = False) -> List[Dict]:
"""
Main text parsing function
Parameters:
data: (string) text data to parse
raw: (boolean) unprocessed output if True
quiet: (boolean) suppress warning messages if True
Returns:
List of Dictionaries. Raw or processed structured data.
"""
jc.utils.compatibility(__name__, info.compatible, quiet)
jc.utils.input_type_check(data)
raw_output: List[dict] = []
package: Dict = {}
if jc.utils.has_data(data):
lines = iter(data.splitlines())
for line in lines:
line = line.strip()
if not line:
if package:
raw_output.append(package)
package = {}
continue
key = line[0]
value = line[2:].strip()
assert key not in package
package[key] = value
if package:
raw_output.append(package)
return raw_output if raw else _process(raw_output)
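The APKINDEX record format is simple enough to demonstrate standalone. A minimal sketch (not the parser itself) of how the single-letter keys and the `m:` maintainer field are handled, using the same `Name <email>` regex as the diff:

```python
import re

# A trimmed APKINDEX record: each line is "<key>:<value>" with a
# single-character key; a blank line would end the record.
record = """C:Q1znBl9k+RKgY6gl5Eg3iz73KZbLY=
P:yasm
V:1.3.0-r4
m:Natanael Copa <ncopa@alpinelinux.org>
t:1681228881"""

package = {}
for line in record.splitlines():
    key, value = line[0], line[2:].strip()
    package[key] = value

# The maintainer value is split with the same regex the parser uses.
m = re.match(r'(.*) <(.*)>', package['m'])
maintainer = {'name': m.group(1), 'email': m.group(2)}
print(maintainer)  # {'name': 'Natanael Copa', 'email': 'ncopa@alpinelinux.org'}
```

The full parser additionally converts `S`, `I`, `t`, and `k` to integers and splits `D`, `p`, and `i` on spaces, as shown in `_value()` above.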

jc/parsers/pkg_index_deb.py Normal file
View File

@ -0,0 +1,148 @@
"""jc - JSON Convert Debian Package Index file parser
Usage (cli):
$ cat Packages | jc --pkg-index-deb
Usage (module):
import jc
result = jc.parse('pkg_index_deb', pkg_index_deb_output)
Schema:
[
{
"package": string,
"version": string,
"architecture": string,
"section": string,
"priority": string,
"installed_size": integer,
"maintainer": string,
"description": string,
"homepage": string,
"depends": string,
"conflicts": string,
"replaces": string,
"vcs_git": string,
"sha256": string,
"size": integer,
"filename": string
}
]
Examples:
$ cat Packages | jc --pkg-index-deb
[
{
"package": "aspnetcore-runtime-2.1",
"version": "2.1.22-1",
"architecture": "amd64",
"section": "devel",
"priority": "standard",
"installed_size": 71081,
"maintainer": "Microsoft <nugetaspnet@microsoft.com>",
"description": "Microsoft ASP.NET Core 2.1.22 Shared Framework",
"homepage": "https://www.asp.net/",
"depends": "libc6 (>= 2.14), dotnet-runtime-2.1 (>= 2.1.22)",
"sha256": "48d4e78a7ceff34105411172f4c3e91a0359b3929d84d26a493...",
"size": 21937036,
"filename": "pool/main/a/aspnetcore-runtime-2.1/aspnetcore-run..."
},
{
"package": "azure-functions-core-tools-4",
"version": "4.0.4590-1",
"architecture": "amd64",
"section": "devel",
"priority": "optional",
"maintainer": "Ahmed ElSayed <ahmels@microsoft.com>",
"description": "Azure Function Core Tools v4",
"homepage": "https://docs.microsoft.com/en-us/azure/azure-func...",
"conflicts": "azure-functions-core-tools-2, azure-functions-co...",
"replaces": "azure-functions-core-tools-2, azure-functions-cor...",
"vcs_git": "https://github.com/Azure/azure-functions-core-tool...",
"sha256": "a2a4f99d6d98ba0a46832570285552f2a93bab06cebbda2afc7...",
"size": 124417844,
"filename": "pool/main/a/azure-functions-core-tools-4/azure-fu..."
}
]
$ cat Packages | jc --pkg-index-deb -r
[
{
"package": "aspnetcore-runtime-2.1",
"version": "2.1.22-1",
"architecture": "amd64",
"section": "devel",
"priority": "standard",
"installed_size": "71081",
"maintainer": "Microsoft <nugetaspnet@microsoft.com>",
"description": "Microsoft ASP.NET Core 2.1.22 Shared Framework",
"homepage": "https://www.asp.net/",
"depends": "libc6 (>= 2.14), dotnet-runtime-2.1 (>= 2.1.22)",
"sha256": "48d4e78a7ceff34105411172f4c3e91a0359b3929d84d26a493...",
"size": "21937036",
"filename": "pool/main/a/aspnetcore-runtime-2.1/aspnetcore-run..."
},
{
"package": "azure-functions-core-tools-4",
"version": "4.0.4590-1",
"architecture": "amd64",
"section": "devel",
"priority": "optional",
"maintainer": "Ahmed ElSayed <ahmels@microsoft.com>",
"description": "Azure Function Core Tools v4",
"homepage": "https://docs.microsoft.com/en-us/azure/azure-func...",
"conflicts": "azure-functions-core-tools-2, azure-functions-co...",
"replaces": "azure-functions-core-tools-2, azure-functions-cor...",
"vcs_git": "https://github.com/Azure/azure-functions-core-tool...",
"sha256": "a2a4f99d6d98ba0a46832570285552f2a93bab06cebbda2afc7...",
"size": "124417844",
"filename": "pool/main/a/azure-functions-core-tools-4/azure-fu..."
}
]
"""
from typing import List
from jc.jc_types import JSONDictType
import jc.parsers.rpm_qi as rpm_qi
class info():
"""Provides parser metadata (version, author, etc.)"""
version = '1.0'
description = 'Debian Package Index file parser'
author = 'Kelly Brazil'
author_email = 'kellyjonbrazil@gmail.com'
details = 'Using the rpm-qi parser'
compatible = ['linux', 'darwin', 'cygwin', 'win32', 'aix', 'freebsd']
tags = ['file']
__version__ = info.version
def parse(
data: str,
raw: bool = False,
quiet: bool = False
) -> List[JSONDictType]:
"""
Main text parsing function
Parameters:
data: (string) text data to parse
raw: (boolean) unprocessed output if True
quiet: (boolean) suppress warning messages if True
Returns:
List of Dictionaries. Raw or processed structured data.
"""
# This parser is an alias of rpm_qi.py
rpm_qi.info.compatible = ['linux', 'darwin', 'cygwin', 'win32', 'aix', 'freebsd']
rpm_qi.info.tags = ['file']
return rpm_qi.parse(data, raw, quiet)

View File

@ -120,7 +120,7 @@ from jc.exceptions import ParseError
class info(): class info():
"""Provides parser metadata (version, author, etc.)""" """Provides parser metadata (version, author, etc.)"""
version = '1.1' version = '1.2'
description = '`/proc/` file parser' description = '`/proc/` file parser'
author = 'Kelly Brazil' author = 'Kelly Brazil'
author_email = 'kellyjonbrazil@gmail.com' author_email = 'kellyjonbrazil@gmail.com'
@ -154,6 +154,7 @@ def parse(
if jc.utils.has_data(data): if jc.utils.has_data(data):
# signatures # signatures
buddyinfo_p = re.compile(r'^Node \d+, zone\s+\w+\s+(?:\d+\s+){11}\n') buddyinfo_p = re.compile(r'^Node \d+, zone\s+\w+\s+(?:\d+\s+){11}\n')
cmdline_p = re.compile(r'^BOOT_IMAGE=')
consoles_p = re.compile(r'^\w+\s+[\-WUR]{3} \([ECBpba ]+\)\s+\d+:\d+\n') consoles_p = re.compile(r'^\w+\s+[\-WUR]{3} \([ECBpba ]+\)\s+\d+:\d+\n')
cpuinfo_p = re.compile(r'^processor\t+: \d+.*bogomips\t+: \d+.\d\d\n', re.DOTALL) cpuinfo_p = re.compile(r'^processor\t+: \d+.*bogomips\t+: \d+.\d\d\n', re.DOTALL)
crypto_p = re.compile(r'^name\s+:.*\ndriver\s+:.*\nmodule\s+:.*\n') crypto_p = re.compile(r'^name\s+:.*\ndriver\s+:.*\nmodule\s+:.*\n')
@ -212,6 +213,7 @@ def parse(
procmap = { procmap = {
buddyinfo_p: 'proc_buddyinfo', buddyinfo_p: 'proc_buddyinfo',
cmdline_p: 'proc_cmdline',
consoles_p: 'proc_consoles', consoles_p: 'proc_consoles',
cpuinfo_p: 'proc_cpuinfo', cpuinfo_p: 'proc_cpuinfo',
crypto_p: 'proc_crypto', crypto_p: 'proc_crypto',

jc/parsers/proc_cmdline.py Normal file
View File

@ -0,0 +1,138 @@
"""jc - JSON Convert `/proc/cmdline` file parser
Usage (cli):
$ cat /proc/cmdline | jc --proc
or
$ jc /proc/cmdline
or
$ cat /proc/cmdline | jc --proc-cmdline
Usage (module):
import jc
result = jc.parse('proc_cmdline', proc_cmdline_file)
Schema:
{
"<key>": string,
"_options": [
string
]
}
Examples:
$ cat /proc/cmdline | jc --proc -p
{
"BOOT_IMAGE": "clonezilla/live/vmlinuz",
"consoleblank": "0",
"keyboard-options": "grp:ctrl_shift_toggle,lctrl_shift_toggle",
"ethdevice-timeout": "130",
"toram": "filesystem.squashfs",
"boot": "live",
"edd": "on",
"ocs_daemonon": "ssh lighttpd",
"ocs_live_run": "sudo screen /usr/sbin/ocs-sr -g auto -e1 auto -e2 -batch -r -j2 -k -scr -p true restoreparts win7-64 sda1",
"ocs_live_extra_param": "",
"keyboard-layouts": "us,ru",
"ocs_live_batch": "no",
"locales": "ru_RU.UTF-8",
"vga": "788",
"net.ifnames": "0",
"union": "overlay",
"fetch": "http://10.1.1.1/tftpboot/clonezilla/live/filesystem.squashfs",
"ocs_postrun99": "sudo reboot",
"initrd": "clonezilla/live/initrd.img",
"_options": [
"config",
"noswap",
"nolocales",
"nomodeset",
"noprompt",
"nosplash",
"nodmraid",
"components"
]
}
"""
import shlex
from typing import List, Dict
from jc.jc_types import JSONDictType
import jc.utils
class info():
"""Provides parser metadata (version, author, etc.)"""
version = '1.0'
description = '`/proc/cmdline` file parser'
author = 'Kelly Brazil'
author_email = 'kellyjonbrazil@gmail.com'
compatible = ['linux']
tags = ['file']
hidden = True
__version__ = info.version
def _process(proc_data: JSONDictType) -> JSONDictType:
"""
Final processing to conform to the schema.
Parameters:
proc_data: (List of Dictionaries) raw structured data to process
Returns:
Dictionary. Structured to conform to the schema.
"""
return proc_data
def parse(
data: str,
raw: bool = False,
quiet: bool = False
) -> JSONDictType:
"""
Main text parsing function
Parameters:
data: (string) text data to parse
raw: (boolean) unprocessed output if True
quiet: (boolean) suppress warning messages if True
Returns:
Dictionary. Raw or processed structured data.
"""
jc.utils.compatibility(__name__, info.compatible, quiet)
jc.utils.input_type_check(data)
raw_output: Dict = {}
options: List = []
if jc.utils.has_data(data):
split_line = shlex.split(data)
for item in split_line:
if '=' in item:
key, val = item.split('=', maxsplit=1)
raw_output[key] = val
else:
options.append(item)
if options:
raw_output['_options'] = options
return raw_output if raw else _process(raw_output)
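The split between `<key>` entries and `_options` can be seen with a small standalone sketch of the same `shlex.split()` approach used above:

```python
import shlex

# A typical kernel command line: key=value pairs mixed with bare flags.
cmdline = 'BOOT_IMAGE=/vmlinuz-5.15 root=/dev/sda1 ro quiet splash'

parsed = {}
options = []
for item in shlex.split(cmdline):
    if '=' in item:
        # Split only on the first '=' so values may themselves contain '='.
        key, val = item.split('=', maxsplit=1)
        parsed[key] = val
    else:
        options.append(item)
if options:
    parsed['_options'] = options

print(parsed)
# {'BOOT_IMAGE': '/vmlinuz-5.15', 'root': '/dev/sda1',
#  '_options': ['ro', 'quiet', 'splash']}
```

`shlex.split()` also handles quoted values (e.g. `ocs_live_run="sudo screen ..."` in the Clonezilla example), which a plain `str.split()` would break apart.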

View File

@ -161,7 +161,7 @@ import jc.utils
class info(): class info():
"""Provides parser metadata (version, author, etc.)""" """Provides parser metadata (version, author, etc.)"""
version = '1.6' version = '1.7'
description = '`rpm -qi` command parser' description = '`rpm -qi` command parser'
author = 'Kelly Brazil' author = 'Kelly Brazil'
author_email = 'kellyjonbrazil@gmail.com' author_email = 'kellyjonbrazil@gmail.com'
@ -185,7 +185,7 @@ def _process(proc_data):
List of Dictionaries. Structured data to conform to the schema. List of Dictionaries. Structured data to conform to the schema.
""" """
int_list = {'epoch', 'size'} int_list = {'epoch', 'size', 'installed_size'}
for entry in proc_data: for entry in proc_data:
for key in entry: for key in entry:
@ -234,7 +234,7 @@ def parse(data, raw=False, quiet=False):
for line in filter(None, data.splitlines()): for line in filter(None, data.splitlines()):
split_line = line.split(': ', maxsplit=1) split_line = line.split(': ', maxsplit=1)
if split_line[0].startswith('Name') and len(split_line) == 2: if (split_line[0].startswith('Name') or split_line[0] == 'Package') and len(split_line) == 2:
this_entry = split_line[1].strip() this_entry = split_line[1].strip()
if this_entry != last_entry: if this_entry != last_entry:
@ -247,7 +247,7 @@ def parse(data, raw=False, quiet=False):
desc_entry = False desc_entry = False
if len(split_line) == 2: if len(split_line) == 2:
entry_obj[split_line[0].strip().lower().replace(' ', '_')] = split_line[1].strip() entry_obj[split_line[0].strip().lower().replace(' ', '_').replace('-', '_')] = split_line[1].strip()
if line.startswith('Description :'): if line.startswith('Description :'):
desc_entry = True desc_entry = True

jc/parsers/swapon.py Normal file
View File

@ -0,0 +1,173 @@
"""jc - JSON Convert `swapon` command output parser
Usage (cli):
$ swapon | jc --swapon
or
$ jc swapon
Usage (module):
import jc
result = jc.parse('swapon', swapon_command_output)
Schema:
[
{
"name": string,
"type": string,
"size": integer,
"used": integer,
"priority": integer
}
]
Example:
$ swapon | jc --swapon
[
{
"name": "/swapfile",
"type": "file",
"size": 1073741824,
"used": 0,
"priority": -2
}
]
"""
from enum import Enum
from jc.exceptions import ParseError
import jc.utils
from typing import List, Dict, Union
class info:
"""Provides parser metadata (version, author, etc.)"""
version = "1.0"
description = "`swapon` command parser"
author = "Roey Darwish Dror"
author_email = "roey.ghost@gmail.com"
compatible = ["linux", "freebsd"]
magic_commands = ["swapon"]
tags = ["command"]
__version__ = info.version
_Value = Union[str, int]
_Entry = Dict[str, _Value]
class _Column(Enum):
NAME = "name"
TYPE = "type"
SIZE = "size"
USED = "used"
PRIO = "priority"
LABEL = "label"
UUID = "uuid"
@classmethod
def from_header(cls, header: str) -> "_Column":
if (header == "NAME") or (header == "Filename"):
return cls.NAME
elif (header == "TYPE") or (header == "Type"):
return cls.TYPE
elif (header == "SIZE") or (header == "Size"):
return cls.SIZE
elif (header == "USED") or (header == "Used"):
return cls.USED
elif (header == "PRIO") or (header == "Priority"):
return cls.PRIO
elif header == "LABEL":
return cls.LABEL
elif header == "UUID":
return cls.UUID
else:
raise ParseError(f"Unknown header: {header}")
def _parse_size(size: str) -> int:
power = None
if size[-1] == "B":
power = 0
if size[-1] == "K":
power = 1
elif size[-1] == "M":
power = 2
elif size[-1] == "G":
power = 3
elif size[-1] == "T":
power = 4
multiplier = 1024**power if power is not None else 1024
return (int(size[:-1]) if power is not None else int(size)) * multiplier
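The suffix handling above maps `B`/`K`/`M`/`G`/`T` to powers of 1024, and an unsuffixed value appears to be treated as KiB (1024-byte blocks, matching non-human-readable `swapon` output). A standalone sketch of the same conversion:

```python
def parse_size(size: str) -> int:
    # Mirrors the suffix logic above: B/K/M/G/T select a power of 1024;
    # a value with no suffix is taken as a count of 1024-byte blocks.
    powers = {"B": 0, "K": 1, "M": 2, "G": 3, "T": 4}
    power = powers.get(size[-1])
    if power is None:
        return int(size) * 1024
    return int(size[:-1]) * 1024 ** power

print(parse_size("1G"))       # 1073741824
print(parse_size("512M"))     # 536870912
print(parse_size("1048576"))  # unsuffixed: 1048576 KiB -> 1073741824
```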
def _value(value: str, column: _Column) -> _Value:
if column == _Column.SIZE or column == _Column.USED:
return _parse_size(value)
elif column == _Column.PRIO:
return int(value)
else:
return value
def _process(proc_data: List[Dict]) -> List[Dict]:
"""
Final processing to conform to the schema.
Parameters:
proc_data: (List of Dictionaries) raw structured data to process
Returns:
List of Dictionaries. Structured to conform to the schema.
"""
return proc_data
def parse(data: str, raw: bool = False, quiet: bool = False) -> List[_Entry]:
"""
Main text parsing function
Parameters:
data: (string) text data to parse
raw: (boolean) unprocessed output if True
quiet: (boolean) suppress warning messages if True
Returns:
Dictionary. Raw or processed structured data.
"""
jc.utils.compatibility(__name__, info.compatible, quiet)
jc.utils.input_type_check(data)
raw_output: List[dict] = []
if jc.utils.has_data(data):
lines = iter(data.splitlines())
headers = next(lines)
columns = headers.split()
for each_line in lines:
line = each_line.split()
diff = len(columns) - len(line)
if not 0 <= diff <= 2:
raise ParseError(
f"Number of columns ({len(line)}) in line does not match number of headers ({len(columns)})"
)
document: _Entry = {}
for each_column, value in zip(columns, line):
column = _Column.from_header(each_column)
document[column.value] = _value(value, column)
raw_output.append(document)
return raw_output if raw else _process(raw_output)

jc/parsers/tune2fs.py Normal file
View File

@ -0,0 +1,300 @@
"""jc - JSON Convert `tune2fs -l` command output parser
Usage (cli):
$ tune2fs -l /dev/xvda4 | jc --tune2fs
or
$ jc tune2fs -l /dev/xvda4
Usage (module):
import jc
result = jc.parse('tune2fs', tune2fs_command_output)
Schema:
{
"version": string,
"filesystem_volume_name": string,
"last_mounted_on": string,
"filesystem_uuid": string,
"filesystem_magic_number": string,
"filesystem_revision_number": string,
"filesystem_features": [
string
],
"filesystem_flags": string,
"default_mount_options": string,
"filesystem_state": string,
"errors_behavior": string,
"filesystem_os_type": string,
"inode_count": integer,
"block_count": integer,
"reserved_block_count": integer,
"overhead_clusters": integer,
"free_blocks": integer,
"free_inodes": integer,
"first_block": integer,
"block_size": integer,
"fragment_size": integer,
"group_descriptor_size": integer,
"reserved_gdt_blocks": integer,
"blocks_per_group": integer,
"fragments_per_group": integer,
"inodes_per_group": integer,
"inode_blocks_per_group": integer,
"flex_block_group_size": integer,
"filesystem_created": string,
"filesystem_created_epoch": integer,
"filesystem_created_epoch_utc": integer,
"last_mount_time": string,
"last_mount_time_epoch": integer,
"last_mount_time_epoch_utc": integer,
"last_write_time": string,
"last_write_time_epoch": integer,
"last_write_time_epoch_utc": integer,
"mount_count": integer,
"maximum_mount_count": integer,
"last_checked": string,
"last_checked_epoch": integer,
"last_checked_epoch_utc": integer,
"check_interval": string,
"lifetime_writes": string,
"reserved_blocks_uid": string,
"reserved_blocks_gid": string,
"first_inode": integer,
"inode_size": integer,
"required_extra_isize": integer,
"desired_extra_isize": integer,
"journal_inode": integer,
"default_directory_hash": string,
"directory_hash_seed": string,
"journal_backup": string,
"checksum_type": string,
"checksum": string
}
Examples:
$ tune2fs | jc --tune2fs -p
{
"version": "1.46.2 (28-Feb-2021)",
"filesystem_volume_name": "<none>",
"last_mounted_on": "/home",
"filesystem_uuid": "5fb78e1a-b214-44e2-a309-8e35116d8dd6",
"filesystem_magic_number": "0xEF53",
"filesystem_revision_number": "1 (dynamic)",
"filesystem_features": [
"has_journal",
"ext_attr",
"resize_inode",
"dir_index",
"filetype",
"needs_recovery",
"extent",
"64bit",
"flex_bg",
"sparse_super",
"large_file",
"huge_file",
"dir_nlink",
"extra_isize",
"metadata_csum"
],
"filesystem_flags": "signed_directory_hash",
"default_mount_options": "user_xattr acl",
"filesystem_state": "clean",
"errors_behavior": "Continue",
"filesystem_os_type": "Linux",
"inode_count": 3932160,
"block_count": 15728640,
"reserved_block_count": 786432,
"free_blocks": 15198453,
"free_inodes": 3864620,
"first_block": 0,
"block_size": 4096,
"fragment_size": 4096,
"group_descriptor_size": 64,
"reserved_gdt_blocks": 1024,
"blocks_per_group": 32768,
"fragments_per_group": 32768,
"inodes_per_group": 8192,
"inode_blocks_per_group": 512,
"flex_block_group_size": 16,
"filesystem_created": "Mon Apr 6 15:10:37 2020",
"last_mount_time": "Mon Sep 19 15:16:20 2022",
"last_write_time": "Mon Sep 19 15:16:20 2022",
"mount_count": 14,
"maximum_mount_count": -1,
"last_checked": "Fri Apr 8 15:24:22 2022",
"check_interval": "0 (<none>)",
"lifetime_writes": "203 GB",
"reserved_blocks_uid": "0 (user root)",
"reserved_blocks_gid": "0 (group root)",
"first_inode": 11,
"inode_size": 256,
"required_extra_isize": 32,
"desired_extra_isize": 32,
"journal_inode": 8,
"default_directory_hash": "half_md4",
"directory_hash_seed": "67d5358d-723d-4ce3-b3c0-30ddb433ad9e",
"journal_backup": "inode blocks",
"checksum_type": "crc32c",
"checksum": "0x7809afff",
"filesystem_created_epoch": 1586211037,
"filesystem_created_epoch_utc": null,
"last_mount_time_epoch": 1663625780,
"last_mount_time_epoch_utc": null,
"last_write_time_epoch": 1663625780,
"last_write_time_epoch_utc": null,
"last_checked_epoch": 1649456662,
"last_checked_epoch_utc": null
}
$ tune2fs | jc --tune2fs -p -r
{
"version": "1.46.2 (28-Feb-2021)",
"filesystem_volume_name": "<none>",
"last_mounted_on": "/home",
"filesystem_uuid": "5fb78e1a-b214-44e2-a309-8e35116d8dd6",
"filesystem_magic_number": "0xEF53",
"filesystem_revision_number": "1 (dynamic)",
"filesystem_features": "has_journal ext_attr resize_inode dir_index filetype needs_recovery extent 64bit flex_bg sparse_super large_file huge_file dir_nlink extra_isize metadata_csum",
"filesystem_flags": "signed_directory_hash",
"default_mount_options": "user_xattr acl",
"filesystem_state": "clean",
"errors_behavior": "Continue",
"filesystem_os_type": "Linux",
"inode_count": "3932160",
"block_count": "15728640",
"reserved_block_count": "786432",
"free_blocks": "15198453",
"free_inodes": "3864620",
"first_block": "0",
"block_size": "4096",
"fragment_size": "4096",
"group_descriptor_size": "64",
"reserved_gdt_blocks": "1024",
"blocks_per_group": "32768",
"fragments_per_group": "32768",
"inodes_per_group": "8192",
"inode_blocks_per_group": "512",
"flex_block_group_size": "16",
"filesystem_created": "Mon Apr 6 15:10:37 2020",
"last_mount_time": "Mon Sep 19 15:16:20 2022",
"last_write_time": "Mon Sep 19 15:16:20 2022",
"mount_count": "14",
"maximum_mount_count": "-1",
"last_checked": "Fri Apr 8 15:24:22 2022",
"check_interval": "0 (<none>)",
"lifetime_writes": "203 GB",
"reserved_blocks_uid": "0 (user root)",
"reserved_blocks_gid": "0 (group root)",
"first_inode": "11",
"inode_size": "256",
"required_extra_isize": "32",
"desired_extra_isize": "32",
"journal_inode": "8",
"default_directory_hash": "half_md4",
"directory_hash_seed": "67d5358d-723d-4ce3-b3c0-30ddb433ad9e",
"journal_backup": "inode blocks",
"checksum_type": "crc32c",
"checksum": "0x7809afff"
}
"""
from typing import Dict
from jc.jc_types import JSONDictType
import jc.utils
class info():
"""Provides parser metadata (version, author, etc.)"""
version = '1.0'
description = '`tune2fs -l` command parser'
author = 'Kelly Brazil'
author_email = 'kellyjonbrazil@gmail.com'
compatible = ['linux']
tags = ['command']
magic_commands = ['tune2fs -l']
__version__ = info.version
def _process(proc_data: JSONDictType) -> JSONDictType:
"""
Final processing to conform to the schema.
Parameters:
proc_data: (Dictionary) raw structured data to process
Returns:
Dictionary. Structured to conform to the schema.
"""
int_list = {'inode_count', 'block_count', 'reserved_block_count', 'free_blocks',
'free_inodes', 'first_block', 'block_size', 'fragment_size',
'group_descriptor_size', 'reserved_gdt_blocks', 'blocks_per_group',
'fragments_per_group', 'inodes_per_group', 'inode_blocks_per_group',
'flex_block_group_size', 'mount_count', 'maximum_mount_count',
'first_inode', 'inode_size', 'required_extra_isize', 'desired_extra_isize',
'journal_inode', 'overhead_clusters'}
datetime_list = {'filesystem_created', 'last_mount_time', 'last_write_time', 'last_checked'}
for key in proc_data:
if key in int_list:
proc_data[key] = jc.utils.convert_to_int(proc_data[key])
for key in proc_data.copy():
if key in datetime_list:
dt = jc.utils.timestamp(proc_data[key], (1000,))
proc_data[key + '_epoch'] = dt.naive
proc_data[key + '_epoch_utc'] = dt.utc
if 'filesystem_features' in proc_data:
proc_data['filesystem_features'] = proc_data['filesystem_features'].split()
return proc_data
def parse(
data: str,
raw: bool = False,
quiet: bool = False
) -> JSONDictType:
"""
Main text parsing function
Parameters:
data: (string) text data to parse
raw: (boolean) unprocessed output if True
quiet: (boolean) suppress warning messages if True
Returns:
Dictionary. Raw or processed structured data.
"""
jc.utils.compatibility(__name__, info.compatible, quiet)
jc.utils.input_type_check(data)
raw_output: Dict = {}
if jc.utils.has_data(data):
for line in filter(None, data.splitlines()):
if line.startswith('tune2fs '):
raw_output['version'] = line.split(maxsplit=1)[1]
continue
linesplit = line.split(':', maxsplit=1)
key = linesplit[0].lower().replace(' ', '_').replace('#', 'number')
val = linesplit[1].strip()
raw_output[key] = val
return raw_output if raw else _process(raw_output)
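The key normalization applied to each `Label:  value` line (lowercase, spaces to underscores, `#` to `number`) can be checked standalone; the labels below are ordinary `tune2fs -l` output fields:

```python
def normalize_key(label: str) -> str:
    # Same transform the parser applies to the text left of the colon.
    return label.lower().replace(' ', '_').replace('#', 'number')

print(normalize_key('Filesystem volume name'))  # filesystem_volume_name
print(normalize_key('Reserved block count'))    # reserved_block_count
print(normalize_key('Inode count'))             # inode_count
```

`_process()` then converts the integer fields, splits `filesystem_features` on whitespace, and derives the `*_epoch` timestamp fields from the date strings.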

View File

@ -121,12 +121,16 @@ Examples:
} }
] ]
""" """
import re
import jc.utils import jc.utils
PROCS_HEADER_RE = re.compile(r'^-*procs-* ')
DISK_HEADER_RE = re.compile(r'^-*disk-* ')
class info(): class info():
"""Provides parser metadata (version, author, etc.)""" """Provides parser metadata (version, author, etc.)"""
version = '1.3' version = '1.4'
description = '`vmstat` command parser' description = '`vmstat` command parser'
author = 'Kelly Brazil' author = 'Kelly Brazil'
author_email = 'kellyjonbrazil@gmail.com' author_email = 'kellyjonbrazil@gmail.com'
@ -203,18 +207,18 @@ def parse(data, raw=False, quiet=False):
for line in filter(None, data.splitlines()): for line in filter(None, data.splitlines()):
# detect output type # detect output type
if not procs and not disk and line.startswith('procs'): if not procs and not disk and PROCS_HEADER_RE.match(line):
procs = True procs = True
tstamp = '-timestamp-' in line tstamp = '-timestamp-' in line
continue continue
if not procs and not disk and line.startswith('disk'): if not procs and not disk and DISK_HEADER_RE.match(line):
disk = True disk = True
tstamp = '-timestamp-' in line tstamp = '-timestamp-' in line
continue continue
# skip header rows # skip header rows
if (procs or disk) and (line.startswith('procs') or line.startswith('disk')): if (procs or disk) and (PROCS_HEADER_RE.match(line) or DISK_HEADER_RE.match(line)):
continue continue
if 'swpd' in line and 'free' in line and 'buff' in line and 'cache' in line: if 'swpd' in line and 'free' in line and 'buff' in line and 'cache' in line:
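The fix replaces the bare `startswith('procs')` check with a regex that also matches the dash-padded headers `vmstat` emits in wide (`-w`) output. A standalone check with the same patterns as the diff:

```python
import re

# Same signatures as the parser: optional leading/trailing dashes
# around "procs"/"disk", followed by a space.
PROCS_HEADER_RE = re.compile(r'^-*procs-* ')
DISK_HEADER_RE = re.compile(r'^-*disk-* ')

# Both the normal and the wide-output header lines match now.
print(bool(PROCS_HEADER_RE.match('procs -----------memory----------')))    # True
print(bool(PROCS_HEADER_RE.match('--procs-- -----------memory-----------')))  # True
print(bool(DISK_HEADER_RE.match('disk- ------------reads------------')))   # True
```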

View File

@ -91,16 +91,20 @@ Examples:
{"runnable_procs":"2","uninterruptible_sleeping_procs":"0","virtua...} {"runnable_procs":"2","uninterruptible_sleeping_procs":"0","virtua...}
... ...
""" """
import re
import jc.utils import jc.utils
from jc.streaming import ( from jc.streaming import (
add_jc_meta, streaming_input_type_check, streaming_line_input_type_check, raise_or_yield add_jc_meta, streaming_input_type_check, streaming_line_input_type_check, raise_or_yield
) )
from jc.exceptions import ParseError from jc.exceptions import ParseError
PROCS_HEADER_RE = re.compile(r'^-*procs-* ')
DISK_HEADER_RE = re.compile(r'^-*disk-* ')
class info(): class info():
"""Provides parser metadata (version, author, etc.)""" """Provides parser metadata (version, author, etc.)"""
version = '1.2' version = '1.3'
description = '`vmstat` command streaming parser' description = '`vmstat` command streaming parser'
author = 'Kelly Brazil' author = 'Kelly Brazil'
author_email = 'kellyjonbrazil@gmail.com' author_email = 'kellyjonbrazil@gmail.com'
@ -182,18 +186,18 @@ def parse(data, raw=False, quiet=False, ignore_exceptions=False):
continue continue
# detect output type # detect output type
if not procs and not disk and line.startswith('procs'): if not procs and not disk and PROCS_HEADER_RE.match(line):
procs = True procs = True
tstamp = '-timestamp-' in line tstamp = '-timestamp-' in line
continue continue
if not procs and not disk and line.startswith('disk'): if not procs and not disk and DISK_HEADER_RE.match(line):
disk = True disk = True
tstamp = '-timestamp-' in line tstamp = '-timestamp-' in line
continue continue
# skip header rows # skip header rows
if (procs or disk) and (line.startswith('procs') or line.startswith('disk')): if (procs or disk) and (PROCS_HEADER_RE.match(line) or DISK_HEADER_RE.match(line)):
continue continue
if 'swpd' in line and 'free' in line and 'buff' in line and 'cache' in line: if 'swpd' in line and 'free' in line and 'buff' in line and 'cache' in line:

View File

@@ -26,7 +26,8 @@ Schema:
      "current_height": integer,
      "maximum_width": integer,
      "maximum_height": integer,
-     "devices": {
+     "devices": [
+       {
          "modes": [
            {
              "resolution_width": integer,
@@ -41,7 +42,8 @@ Schema:
            ]
          }
        ]
-     },
+       }
+     ],
      "is_connected": boolean,
      "is_primary": boolean,
      "device_name": string,
@@ -57,7 +59,7 @@ Schema:
      "rotation": string,
      "reflection": string
    }
-  ],
+  ]
  }
Examples:
@@ -73,7 +75,8 @@ Examples:
      "current_height": 1080,
      "maximum_width": 32767,
      "maximum_height": 32767,
-     "devices": {
+     "devices": [
+       {
          "modes": [
            {
              "resolution_width": 1920,
@@ -117,6 +120,7 @@ Examples:
          "rotation": "normal",
          "reflection": "normal"
        }
+     ]
    }
  ]
}
@@ -132,7 +136,8 @@ Examples:
      "current_height": 1080,
      "maximum_width": 32767,
      "maximum_height": 32767,
-     "devices": {
+     "devices": [
+       {
          "modes": [
            {
              "resolution_width": 1920,
@@ -179,6 +184,7 @@ Examples:
          "rotation": "normal",
          "reflection": "normal"
        }
+     ]
    }
  ]
}
@@ -192,8 +198,7 @@ from jc.parsers.pyedid.helpers.edid_helper import EdidHelper
class info:
    """Provides parser metadata (version, author, etc.)"""
-    version = "1.3"
+    version = "1.4"
    description = "`xrandr` command parser"
    author = "Kevin Lyter"
    author_email = "code (at) lyterk.com"
@@ -205,6 +210,35 @@ class info:
__version__ = info.version
+# keep parsing state so we know which parsers have already tried the line
+# Structure is:
+# {
+#     <line_string>: [
+#         <parser_string>
+#     ]
+# }
+#
+# Where <line_string> is the xrandr output line to be checked and <parser_string>
+# can contain "screen", "device", or "model"
+parse_state: Dict[str, List] = {}
+def _was_parsed(line: str, parser: str) -> bool:
+    """
+    Check if entered parser has already parsed. If so return True.
+    If not, return false and add the parser to the list for the line entry.
+    """
+    if line in parse_state:
+        if parser in parse_state[line]:
+            return True
+        parse_state[line].append(parser)
+        return False
+    parse_state[line] = [parser]
+    return False
try:
    from typing import TypedDict
@@ -291,6 +325,10 @@ _screen_pattern = (
def _parse_screen(next_lines: List[str]) -> Optional[Screen]:
    next_line = next_lines.pop()
+   if _was_parsed(next_line, 'screen'):
+       return None
    result = re.match(_screen_pattern, next_line)
    if not result:
        next_lines.append(next_line)
@@ -323,8 +361,8 @@ _device_pattern = (
    + r"\+(?P<offset_width>\d+)\+(?P<offset_height>\d+))? "
    + r"(?P<rotation>(normal|right|left|inverted)?) ?"
    + r"(?P<reflection>(X axis|Y axis|X and Y axis)?) ?"
-   + r"\(normal left inverted right x axis y axis\)"
-   + r"( ((?P<dimension_width>\d+)mm x (?P<dimension_height>\d+)mm)?)?"
+   + r"(\(normal left inverted right x axis y axis\))?"
+   + r"( ?((?P<dimension_width>\d+)mm x (?P<dimension_height>\d+)mm)?)?"
)
@@ -333,6 +371,10 @@ def _parse_device(next_lines: List[str], quiet: bool = False) -> Optional[Device
        return None
    next_line = next_lines.pop()
+   if _was_parsed(next_line, 'device'):
+       return None
    result = re.match(_device_pattern, next_line)
    if not result:
        next_lines.append(next_line)
@@ -402,6 +444,10 @@ def _parse_model(next_lines: List[str], quiet: bool = False) -> Optional[Model]:
        return None
    next_line = next_lines.pop()
+   if _was_parsed(next_line, 'model'):
+       return None
    if not re.match(_edid_head_pattern, next_line):
        next_lines.append(next_line)
        return None
@@ -438,6 +484,7 @@ _frequencies_pattern = r"(((?P<frequency>\d+\.\d+)(?P<star>\*| |)(?P<plus>\+?)?)"
def _parse_mode(line: str) -> Optional[Mode]:
    result = re.match(_mode_pattern, line)
    frequencies: List[Frequency] = []
    if not result:
        return None
@@ -490,9 +537,10 @@ def parse(data: str, raw: bool = False, quiet: bool = False) -> Dict:
    linedata = data.splitlines()
    linedata.reverse()  # For popping
-   result: Response = {"screens": []}
+   result: Dict = {}
    if jc.utils.has_data(data):
+       result = {"screens": []}
        while linedata:
            screen = _parse_screen(linedata)
            if screen:
@@ -3,6 +3,8 @@
import sys
import re
import locale
import shutil
+from collections import namedtuple
+from numbers import Number
from datetime import datetime, timezone
from textwrap import TextWrapper
from functools import lru_cache
@@ -274,6 +276,117 @@ def convert_to_bool(value: object) -> bool:
    return False
# convert_size_to_int from https://github.com/xolox/python-humanfriendly
# Copyright (c) 2021 Peter Odding
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
def convert_size_to_int(size: str, binary: bool = False) -> Optional[int]:
"""
Parse a human readable data size and return the number of bytes.
Parameters:
size: (string) The human readable file size to parse.
binary: (boolean) `True` to use binary multiples of bytes
(base-2) for ambiguous unit symbols and names,
`False` to use decimal multiples of bytes (base-10).
Returns:
integer/None Integer if successful conversion, otherwise None
This function knows how to parse sizes in bytes, kilobytes, megabytes,
gigabytes, terabytes and petabytes. Some examples:
>>> convert_size_to_int('42')
42
>>> convert_size_to_int('13b')
13
>>> convert_size_to_int('5 bytes')
5
>>> convert_size_to_int('1 KB')
1000
>>> convert_size_to_int('1 kilobyte')
1000
>>> convert_size_to_int('1 KiB')
1024
>>> convert_size_to_int('1 KB', binary=True)
1024
>>> convert_size_to_int('1.5 GB')
1500000000
>>> convert_size_to_int('1.5 GB', binary=True)
1610612736
"""
def tokenize(text: str) -> List[str]:
tokenized_input: List = []
for token in re.split(r'(\d+(?:\.\d+)?)', text):
token = token.strip()
if re.match(r'\d+\.\d+', token):
tokenized_input.append(float(token))
elif token.isdigit():
tokenized_input.append(int(token))
elif token:
tokenized_input.append(token)
return tokenized_input
SizeUnit = namedtuple('SizeUnit', 'divider, symbol, name')
CombinedUnit = namedtuple('CombinedUnit', 'decimal, binary')
disk_size_units = (
CombinedUnit(SizeUnit(1000**1, 'KB', 'kilobyte'), SizeUnit(1024**1, 'KiB', 'kibibyte')),
CombinedUnit(SizeUnit(1000**2, 'MB', 'megabyte'), SizeUnit(1024**2, 'MiB', 'mebibyte')),
CombinedUnit(SizeUnit(1000**3, 'GB', 'gigabyte'), SizeUnit(1024**3, 'GiB', 'gibibyte')),
CombinedUnit(SizeUnit(1000**4, 'TB', 'terabyte'), SizeUnit(1024**4, 'TiB', 'tebibyte')),
CombinedUnit(SizeUnit(1000**5, 'PB', 'petabyte'), SizeUnit(1024**5, 'PiB', 'pebibyte')),
CombinedUnit(SizeUnit(1000**6, 'EB', 'exabyte'), SizeUnit(1024**6, 'EiB', 'exbibyte')),
CombinedUnit(SizeUnit(1000**7, 'ZB', 'zettabyte'), SizeUnit(1024**7, 'ZiB', 'zebibyte')),
CombinedUnit(SizeUnit(1000**8, 'YB', 'yottabyte'), SizeUnit(1024**8, 'YiB', 'yobibyte')),
)
tokens = tokenize(size)
if tokens and isinstance(tokens[0], Number):
# Get the normalized unit (if any) from the tokenized input.
normalized_unit = tokens[1].lower() if len(tokens) == 2 and isinstance(tokens[1], str) else ''
# If the input contains only a number, it's assumed to be the number of
# bytes. The second token can also explicitly reference the unit bytes.
if len(tokens) == 1 or normalized_unit.startswith('b'):
return int(tokens[0])
# Otherwise we expect two tokens: A number and a unit.
if normalized_unit:
# Convert plural units to singular units, for details:
# https://github.com/xolox/python-humanfriendly/issues/26
normalized_unit = normalized_unit.rstrip('s')
for unit in disk_size_units:
# First we check for unambiguous symbols (KiB, MiB, GiB, etc)
# and names (kibibyte, mebibyte, gibibyte, etc) because their
# handling is always the same.
if normalized_unit in (unit.binary.symbol.lower(), unit.binary.name.lower()):
return int(tokens[0] * unit.binary.divider)
# Now we will deal with ambiguous prefixes (K, M, G, etc),
# symbols (KB, MB, GB, etc) and names (kilobyte, megabyte,
# gigabyte, etc) according to the caller's preference.
if (normalized_unit in (unit.decimal.symbol.lower(), unit.decimal.name.lower()) or
normalized_unit.startswith(unit.decimal.symbol[0].lower())):
return int(tokens[0] * (unit.binary.divider if binary else unit.decimal.divider))
# We failed to parse the size specification.
return None
def input_type_check(data: object) -> None:
    """Ensure input data is a string. Raises `TypeError` if not."""
    if not isinstance(data, str):
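A condensed, self-contained sketch of the `convert_size_to_int()` logic added above, limited to the K/M/G tiers (the full function goes up to yottabytes). It shows the ambiguous-unit handling: "KB" means 1000 or 1024 bytes depending on the `binary` flag, while "KiB" is always 1024. The name `size_to_int` is just for this sketch.

```python
import re
from collections import namedtuple
from numbers import Number
from typing import Optional

SizeUnit = namedtuple('SizeUnit', 'divider, symbol, name')
CombinedUnit = namedtuple('CombinedUnit', 'decimal, binary')

UNITS = (
    CombinedUnit(SizeUnit(1000**1, 'KB', 'kilobyte'), SizeUnit(1024**1, 'KiB', 'kibibyte')),
    CombinedUnit(SizeUnit(1000**2, 'MB', 'megabyte'), SizeUnit(1024**2, 'MiB', 'mebibyte')),
    CombinedUnit(SizeUnit(1000**3, 'GB', 'gigabyte'), SizeUnit(1024**3, 'GiB', 'gibibyte')),
)

def size_to_int(size: str, binary: bool = False) -> Optional[int]:
    # split "1.5 GB" into a number token and a unit token
    tokens = []
    for token in re.split(r'(\d+(?:\.\d+)?)', size):
        token = token.strip()
        if re.match(r'\d+\.\d+', token):
            tokens.append(float(token))
        elif token.isdigit():
            tokens.append(int(token))
        elif token:
            tokens.append(token)
    if not tokens or not isinstance(tokens[0], Number):
        return None
    unit = tokens[1].lower().rstrip('s') if len(tokens) == 2 and isinstance(tokens[1], str) else ''
    if not unit or unit.startswith('b'):        # bare number or "bytes"
        return int(tokens[0])
    for u in UNITS:
        # unambiguous binary symbols/names first (KiB, kibibyte, ...)
        if unit in (u.binary.symbol.lower(), u.binary.name.lower()):
            return int(tokens[0] * u.binary.divider)
        # ambiguous prefixes/symbols/names honor the `binary` flag
        if unit in (u.decimal.symbol.lower(), u.decimal.name.lower()) or \
                unit.startswith(u.decimal.symbol[0].lower()):
            return int(tokens[0] * (u.binary.divider if binary else u.decimal.divider))
    return None

print(size_to_int('1 KB'))               # 1000
print(size_to_int('1 KiB'))              # 1024
print(size_to_int('1 KB', binary=True))  # 1024
print(size_to_int('1.5 GB'))             # 1500000000
```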
@@ -1,4 +1,4 @@
-.TH jc 1 2023-10-23 1.23.6 "JSON Convert"
+.TH jc 1 2023-12-17 1.24.0 "JSON Convert"
.SH NAME
\fBjc\fP \- JSON Convert JSONifies the output of many CLI tools, file-types,
and strings
@@ -147,6 +147,11 @@ CSV file streaming parser
\fB--datetime-iso\fP
ISO 8601 Datetime string parser
+.TP
+.B
+\fB--debconf-show\fP
+`debconf-show` command parser
.TP
.B
\fB--df\fP
@@ -322,11 +327,6 @@ IPv4 and IPv6 Address string parser
\fB--ip-route\fP
`ip route` command parser
-.TP
-.B
-\fB--iso-datetime\fP
-Deprecated - please use datetime-iso
.TP
.B
\fB--iw-scan\fP
@@ -512,6 +512,16 @@ PostgreSQL password file parser
\fB--pip-show\fP
`pip show` command parser
+.TP
+.B
+\fB--pkg-index-apk\fP
+Alpine Linux Package Index file parser
+.TP
+.B
+\fB--pkg-index-deb\fP
+Debian Package Index file parser
.TP
.B
\fB--plist\fP
@@ -532,6 +542,11 @@ PLIST file parser
\fB--proc-buddyinfo\fP
`/proc/buddyinfo` file parser
+.TP
+.B
+\fB--proc-cmdline\fP
+`/proc/cmdline` file parser
.TP
.B
\fB--proc-consoles\fP
@@ -852,6 +867,11 @@ SRT file parser
\fB--stat-s\fP
`stat` command streaming parser
+.TP
+.B
+\fB--swapon\fP
+`swapon` command parser
.TP
.B
\fB--sysctl\fP
@@ -942,6 +962,11 @@ TOML file parser
\fB--traceroute\fP
`traceroute` and `traceroute6` command parser
+.TP
+.B
+\fB--tune2fs\fP
+`tune2fs -l` command parser
.TP
.B
\fB--udevadm\fP
@@ -7,7 +7,6 @@ from jinja2 import Environment, FileSystemLoader
file_loader = FileSystemLoader('templates')
env = Environment(loader=file_loader)
template = env.get_template('readme_template')
-# output = template.render(jc=jc.cli.about_jc())
output = template.render(parsers=jc.lib.all_parser_info(),
                         info=jc.cli.JcCli.about_jc())
@@ -5,7 +5,7 @@ with open('README.md', 'r') as f:
setuptools.setup(
    name='jc',
-   version='1.23.6',
+   version='1.24.0',
    author='Kelly Brazil',
    author_email='kellyjonbrazil@gmail.com',
    description='Converts the output of popular command-line tools and file-types to JSON.',
@@ -120,6 +120,7 @@ pip3 install jc
| NixOS linux | `nix-env -iA nixpkgs.jc` or `nix-env -iA nixos.jc` |
| Guix System linux | `guix install jc` |
| Gentoo Linux | `emerge dev-python/jc` |
+| Photon linux | `tdnf install jc` |
| macOS | `brew install jc` |
| FreeBSD | `portsnap fetch update && cd /usr/ports/textproc/py-jc && make install clean` |
| Ansible filter plugin | `ansible-galaxy collection install community.general` |
@@ -0,0 +1 @@
[{"asked":true,"packagename":"onlyoffice","name":"jwt_secret","value":"aL8ei2iereuzee7cuJ6Cahjah1ixee2ah"},{"asked":false,"packagename":"onlyoffice","name":"db_pwd","value":"(password omitted)"},{"asked":true,"packagename":"onlyoffice","name":"rabbitmq_pwd","value":"(password omitted)"},{"asked":true,"packagename":"onlyoffice","name":"ds_port","value":"80"},{"asked":true,"packagename":"onlyoffice","name":"docservice_port","value":"8000"},{"asked":true,"packagename":"onlyoffice","name":"jwt_enabled","value":"true"},{"asked":true,"packagename":"onlyoffice","name":"remove_db","value":"false"},{"asked":true,"packagename":"onlyoffice","name":"jwt_header","value":"Authorization"},{"asked":true,"packagename":"onlyoffice","name":"db_port","value":"5432"},{"asked":true,"packagename":"onlyoffice","name":"rabbitmq_user","value":"onlyoffice"},{"asked":true,"packagename":"onlyoffice","name":"example_port","value":"3000"},{"asked":true,"packagename":"onlyoffice","name":"db_name","value":"onlyoffice"},{"asked":true,"packagename":"onlyoffice","name":"db_host","value":"localhost"},{"asked":true,"packagename":"onlyoffice","name":"db_type","value":"postgres"},{"asked":true,"packagename":"onlyoffice","name":"rabbitmq_host","value":"localhost"},{"asked":true,"packagename":"onlyoffice","name":"db_user","value":"onlyoffice"},{"asked":true,"packagename":"onlyoffice","name":"rabbitmq_proto","value":"amqp"},{"asked":true,"packagename":"onlyoffice","name":"cluster_mode","value":"false"}]
tests/fixtures/generic/debconf-show.out (new file)
@@ -0,0 +1,18 @@
* onlyoffice/jwt-secret: aL8ei2iereuzee7cuJ6Cahjah1ixee2ah
onlyoffice/db-pwd: (password omitted)
* onlyoffice/rabbitmq-pwd: (password omitted)
* onlyoffice/ds-port: 80
* onlyoffice/docservice-port: 8000
* onlyoffice/jwt-enabled: true
* onlyoffice/remove-db: false
* onlyoffice/jwt-header: Authorization
* onlyoffice/db-port: 5432
* onlyoffice/rabbitmq-user: onlyoffice
* onlyoffice/example-port: 3000
* onlyoffice/db-name: onlyoffice
* onlyoffice/db-host: localhost
* onlyoffice/db-type: postgres
* onlyoffice/rabbitmq-host: localhost
* onlyoffice/db-user: onlyoffice
* onlyoffice/rabbitmq-proto: amqp
* onlyoffice/cluster-mode: false
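The fixture pair above implies the line-level mapping for the new `debconf-show` parser: a leading `*` means the question was asked, and dashes in the question name become underscores in the JSON. A minimal illustrative sketch of one line (hypothetical helper, not the jc parser itself):

```python
import re

# "* owner/question-name: value" -> fields seen in the JSON fixture above
LINE_RE = re.compile(r'^(\*)?\s*([^/]+)/([^:]+):\s*(.*)$')

def parse_line(line: str) -> dict:
    asked, package, name, value = LINE_RE.match(line).groups()
    return {
        'asked': asked == '*',
        'packagename': package.strip(),
        'name': name.replace('-', '_'),
        'value': value,
    }

print(parse_line('* onlyoffice/ds-port: 80'))
print(parse_line('  onlyoffice/db-pwd: (password omitted)'))
```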
@@ -0,0 +1 @@
[{"chain":"INPUT","rules":[{"target":null,"prot":"udp","opt":null,"source":"anywhere","destination":"anywhere"}]}]
@@ -0,0 +1,4 @@
Chain INPUT (policy ACCEPT)
target prot opt source destination
udp -- anywhere anywhere
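The fixture pair above exercises the `iptables` fix for a rule whose target column is blank (rendered as null in the JSON). One way to sketch that case (hypothetical helper, not the actual jc logic): a data row that starts with whitespace has an empty target column, so the parsed fields shift right by one.

```python
FIELDS = ['target', 'prot', 'opt', 'source', 'destination']

def parse_rule(line: str) -> dict:
    values = line.split()
    if line.startswith(' '):      # blank target column
        values = [None] + values
    rule = dict(zip(FIELDS, values))
    # the JSON fixture also renders an opt value of "--" as null
    if rule.get('opt') == '--':
        rule['opt'] = None
    return rule

print(parse_rule('           udp  --  anywhere             anywhere'))
print(parse_rule('ACCEPT     udp  --  anywhere             anywhere'))
```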
@@ -0,0 +1 @@
[{"filesystem":"/dev/sda1","mount_point":"/media/ingo/Ubuntu 22.04.3 LTS amd64","type":"iso9660","options":["ro","nosuid","nodev","relatime","nojoliet","check=s","map=n","blocksize=2048","uid=1000","gid=1000","dmode=500","fmode=400","iocharset=utf8","uhelper=udisks2"]}]
@@ -0,0 +1 @@
/dev/sda1 on /media/ingo/Ubuntu 22.04.3 LTS amd64 type iso9660 (ro,nosuid,nodev,relatime,nojoliet,check=s,map=n,blocksize=2048,uid=1000,gid=1000,dmode=500,fmode=400,iocharset=utf8,uhelper=udisks2)
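The fixture above covers the `mount` fix for mount points containing spaces. The trick is to anchor the parse on " on " and " type " rather than splitting on whitespace; a minimal sketch (not the jc implementation itself):

```python
import re

MOUNT_RE = re.compile(
    r'^(?P<filesystem>\S+) on (?P<mount_point>.+) type (?P<type>\S+) \((?P<options>[^)]*)\)$'
)

line = ('/dev/sda1 on /media/ingo/Ubuntu 22.04.3 LTS amd64 type iso9660 '
        '(ro,nosuid,nodev,relatime,nojoliet,check=s,map=n,blocksize=2048,'
        'uid=1000,gid=1000,dmode=500,fmode=400,iocharset=utf8,uhelper=udisks2)')

match = MOUNT_RE.match(line)
entry = match.groupdict()
entry['options'] = entry['options'].split(',')
print(entry['mount_point'])  # /media/ingo/Ubuntu 22.04.3 LTS amd64
```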
@@ -0,0 +1 @@
[{"C":"Q1znBl9k+RKgY6gl5Eg3iz73KZbLY=","P":"yasm","V":"1.3.0-r4","A":"x86_64","S":"772109","I":"1753088","T":"A rewrite of NASM to allow for multiple syntax supported (NASM, TASM, GAS, etc.)","U":"http://www.tortall.net/projects/yasm/","L":"BSD-2-Clause","o":"yasm","m":"Natanael Copa <ncopa@alpinelinux.org>","t":"1681228881","c":"84a227baf001b6e0208e3352b294e4d7a40e93de","D":"so:libc.musl-x86_64.so.1","p":"cmd:vsyasm=1.3.0-r4 cmd:yasm=1.3.0-r4 cmd:ytasm=1.3.0-r4"},{"C":"Q1D3O+vigqMNGhFeVW1bT5Z9mZEf8=","P":"yasm-dev","V":"1.3.0-r4","A":"x86_64","S":"449076","I":"1773568","T":"A rewrite of NASM to allow for multiple syntax supported (NASM, TASM, GAS, etc.) (development files)","U":"http://www.tortall.net/projects/yasm/","L":"BSD-2-Clause","o":"yasm","m":"Natanael Copa <ncopa@alpinelinux.org>","t":"1681228881","c":"84a227baf001b6e0208e3352b294e4d7a40e93de"}]
@@ -0,0 +1 @@
[{"checksum":"Q1znBl9k+RKgY6gl5Eg3iz73KZbLY=","package":"yasm","version":"1.3.0-r4","architecture":"x86_64","package_size":772109,"installed_size":1753088,"description":"A rewrite of NASM to allow for multiple syntax supported (NASM, TASM, GAS, etc.)","url":"http://www.tortall.net/projects/yasm/","license":"BSD-2-Clause","origin":"yasm","maintainer":{"name":"Natanael Copa","email":"ncopa@alpinelinux.org"},"build_time":1681228881,"commit":"84a227baf001b6e0208e3352b294e4d7a40e93de","dependencies":["so:libc.musl-x86_64.so.1"],"provides":["cmd:vsyasm=1.3.0-r4","cmd:yasm=1.3.0-r4","cmd:ytasm=1.3.0-r4"]},{"checksum":"Q1D3O+vigqMNGhFeVW1bT5Z9mZEf8=","package":"yasm-dev","version":"1.3.0-r4","architecture":"x86_64","package_size":449076,"installed_size":1773568,"description":"A rewrite of NASM to allow for multiple syntax supported (NASM, TASM, GAS, etc.) (development files)","url":"http://www.tortall.net/projects/yasm/","license":"BSD-2-Clause","origin":"yasm","maintainer":{"name":"Natanael Copa","email":"ncopa@alpinelinux.org"},"build_time":1681228881,"commit":"84a227baf001b6e0208e3352b294e4d7a40e93de"}]
@@ -0,0 +1,29 @@
C:Q1znBl9k+RKgY6gl5Eg3iz73KZbLY=
P:yasm
V:1.3.0-r4
A:x86_64
S:772109
I:1753088
T:A rewrite of NASM to allow for multiple syntax supported (NASM, TASM, GAS, etc.)
U:http://www.tortall.net/projects/yasm/
L:BSD-2-Clause
o:yasm
m:Natanael Copa <ncopa@alpinelinux.org>
t:1681228881
c:84a227baf001b6e0208e3352b294e4d7a40e93de
D:so:libc.musl-x86_64.so.1
p:cmd:vsyasm=1.3.0-r4 cmd:yasm=1.3.0-r4 cmd:ytasm=1.3.0-r4

C:Q1D3O+vigqMNGhFeVW1bT5Z9mZEf8=
P:yasm-dev
V:1.3.0-r4
A:x86_64
S:449076
I:1773568
T:A rewrite of NASM to allow for multiple syntax supported (NASM, TASM, GAS, etc.) (development files)
U:http://www.tortall.net/projects/yasm/
L:BSD-2-Clause
o:yasm
m:Natanael Copa <ncopa@alpinelinux.org>
t:1681228881
c:84a227baf001b6e0208e3352b294e4d7a40e93de
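The raw and converted `pkg-index-apk` fixtures above imply the field mapping: single-letter APKINDEX keys become descriptive names, and records are separated by a blank line. A simplified sketch (the real parser also converts sizes and timestamps to integers, splits dependency and provides lists, and breaks the maintainer field into name and email):

```python
# Field names taken from the converted JSON fixture above.
FIELD_MAP = {
    'C': 'checksum', 'P': 'package', 'V': 'version', 'A': 'architecture',
    'S': 'package_size', 'I': 'installed_size', 'T': 'description',
    'U': 'url', 'L': 'license', 'o': 'origin', 'm': 'maintainer',
    't': 'build_time', 'c': 'commit', 'D': 'dependencies', 'p': 'provides',
}

def parse_index(text: str) -> list:
    packages = []
    for block in text.strip().split('\n\n'):   # blank line separates records
        pkg = {}
        for line in block.splitlines():
            key, _, value = line.partition(':')
            pkg[FIELD_MAP.get(key, key)] = value
        packages.append(pkg)
    return packages

sample = 'C:Q1znBl9k+RKgY6gl5Eg3iz73KZbLY=\nP:yasm\nV:1.3.0-r4\n\nP:yasm-dev\nV:1.3.0-r4'
pkgs = parse_index(sample)
print(pkgs[0]['package'], pkgs[1]['package'])  # yasm yasm-dev
```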
File diff suppressed because one or more lines are too long
tests/fixtures/generic/pkg-index-deb.out (new file, 29735 lines): file diff suppressed because it is too large
@@ -0,0 +1 @@
[{"name":"/swap.img","type":"file","size":2147483648,"used":2097152,"priority":-2,"uuid":"0918d27e-3907-471d-abb8-45fa49ae059c"}]
@@ -0,0 +1,2 @@
NAME TYPE SIZE USED PRIO UUID LABEL
/swap.img file 2G 2M -2 0918d27e-3907-471d-abb8-45fa49ae059c
@@ -0,0 +1 @@
[{"name":"/swapfile","type":"file","size":1073741824,"used":524288,"priority":-2}]
@@ -0,0 +1,2 @@
NAME TYPE SIZE USED PRIO UUID LABEL
/swapfile file 1024M 512K -2
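The `swapon` fixture pairs above show sizes normalized with binary multiples (2G becomes 2 * 1024**3 = 2147483648 and 512K becomes 524288). A minimal sketch of that conversion (hypothetical helper name):

```python
MULTIPLIERS = {'K': 1024, 'M': 1024**2, 'G': 1024**3, 'T': 1024**4}

def swap_size_to_bytes(size: str) -> int:
    # "2G" -> 2147483648; a bare number passes through unchanged
    if size[-1] in MULTIPLIERS:
        return int(float(size[:-1]) * MULTIPLIERS[size[-1]])
    return int(size)

print(swap_size_to_bytes('2G'))     # 2147483648
print(swap_size_to_bytes('512K'))   # 524288
print(swap_size_to_bytes('1024M'))  # 1073741824
```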
tests/fixtures/generic/tune2fs-l.json (new file)
@@ -0,0 +1 @@
{"version":"1.46.2 (28-Feb-2021)","filesystem_volume_name":"<none>","last_mounted_on":"/home","filesystem_uuid":"5fb78e1a-b214-44e2-a309-8e35116d8dd6","filesystem_magic_number":"0xEF53","filesystem_revision_number":"1 (dynamic)","filesystem_features":["has_journal","ext_attr","resize_inode","dir_index","filetype","needs_recovery","extent","64bit","flex_bg","sparse_super","large_file","huge_file","dir_nlink","extra_isize","metadata_csum"],"filesystem_flags":"signed_directory_hash","default_mount_options":"user_xattr acl","filesystem_state":"clean","errors_behavior":"Continue","filesystem_os_type":"Linux","inode_count":3932160,"block_count":15728640,"reserved_block_count":786432,"free_blocks":15198453,"free_inodes":3864620,"first_block":0,"block_size":4096,"fragment_size":4096,"group_descriptor_size":64,"reserved_gdt_blocks":1024,"blocks_per_group":32768,"fragments_per_group":32768,"inodes_per_group":8192,"inode_blocks_per_group":512,"flex_block_group_size":16,"filesystem_created":"Mon Apr 6 15:10:37 2020","last_mount_time":"Mon Sep 19 15:16:20 2022","last_write_time":"Mon Sep 19 15:16:20 2022","mount_count":14,"maximum_mount_count":-1,"last_checked":"Fri Apr 8 15:24:22 2022","check_interval":"0 (<none>)","lifetime_writes":"203 GB","reserved_blocks_uid":"0 (user root)","reserved_blocks_gid":"0 (group root)","first_inode":11,"inode_size":256,"required_extra_isize":32,"desired_extra_isize":32,"journal_inode":8,"default_directory_hash":"half_md4","directory_hash_seed":"67d5358d-723d-4ce3-b3c0-30ddb433ad9e","journal_backup":"inode blocks","checksum_type":"crc32c","checksum":"0x7809afff","filesystem_created_epoch":1586211037,"filesystem_created_epoch_utc":null,"last_mount_time_epoch":1663625780,"last_mount_time_epoch_utc":null,"last_write_time_epoch":1663625780,"last_write_time_epoch_utc":null,"last_checked_epoch":1649456662,"last_checked_epoch_utc":null}
tests/fixtures/generic/tune2fs-l.out (new file)
@@ -0,0 +1,48 @@
tune2fs 1.46.2 (28-Feb-2021)
Filesystem volume name: <none>
Last mounted on: /home
Filesystem UUID: 5fb78e1a-b214-44e2-a309-8e35116d8dd6
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent 64bit flex_bg sparse_super large_file huge_file dir_nlink extra_isize metadata_csum
Filesystem flags: signed_directory_hash
Default mount options: user_xattr acl
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 3932160
Block count: 15728640
Reserved block count: 786432
Free blocks: 15198453
Free inodes: 3864620
First block: 0
Block size: 4096
Fragment size: 4096
Group descriptor size: 64
Reserved GDT blocks: 1024
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 8192
Inode blocks per group: 512
Flex block group size: 16
Filesystem created: Mon Apr 6 15:10:37 2020
Last mount time: Mon Sep 19 15:16:20 2022
Last write time: Mon Sep 19 15:16:20 2022
Mount count: 14
Maximum mount count: -1
Last checked: Fri Apr 8 15:24:22 2022
Check interval: 0 (<none>)
Lifetime writes: 203 GB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 32
Desired extra isize: 32
Journal inode: 8
Default directory hash: half_md4
Directory Hash Seed: 67d5358d-723d-4ce3-b3c0-30ddb433ad9e
Journal backup: inode blocks
Checksum type: crc32c
Checksum: 0x7809afff
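Comparing the `tune2fs -l` fixture above with its JSON counterpart shows each "Key:   value" line keyed by a snake_case version of the label ("Filesystem revision #" becomes "filesystem_revision_number"). A rough sketch of that normalization (hypothetical helper, not the jc code):

```python
import re

def normalize_key(key: str) -> str:
    # lowercase, spell out "#", collapse non-alphanumeric runs to "_"
    key = key.strip().lower().replace('#', 'number')
    return re.sub(r'[^a-z0-9]+', '_', key).strip('_')

print(normalize_key('Filesystem volume name'))  # filesystem_volume_name
print(normalize_key('Filesystem revision #'))   # filesystem_revision_number
print(normalize_key('Reserved GDT blocks'))     # reserved_gdt_blocks
```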
@@ -0,0 +1 @@
[{"runnable_procs":0,"uninterruptible_sleeping_procs":0,"virtual_mem_used":0,"free_mem":2430264,"buffer_mem":1011084,"cache_mem":22658240,"inactive_mem":null,"active_mem":null,"swap_in":0,"swap_out":0,"blocks_in":4,"blocks_out":6,"interrupts":3,"context_switches":5,"user_time":3,"system_time":1,"idle_time":96,"io_wait_time":0,"stolen_time":0,"timestamp":null,"timezone":null}]
@@ -0,0 +1 @@
[{"runnable_procs":0,"uninterruptible_sleeping_procs":0,"virtual_mem_used":0,"free_mem":2430264,"buffer_mem":1011084,"cache_mem":22658240,"inactive_mem":null,"active_mem":null,"swap_in":0,"swap_out":0,"blocks_in":4,"blocks_out":6,"interrupts":3,"context_switches":5,"user_time":3,"system_time":1,"idle_time":96,"io_wait_time":0,"stolen_time":0,"timestamp":null,"timezone":null}]
@@ -0,0 +1,4 @@
--procs-- -----------------------memory---------------------- ---swap-- -----io---- -system-- --------cpu--------
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 0 0 2430264 1011084 22658240 0 0 4 6 3 5 3 1 96 0 0
@@ -0,0 +1,33 @@
[
{
"device": "enp0s3",
"ip_address": "10.10.15.129",
"mac_address": "08:00:27:c0:4a:4f",
"total_send_rate": {
"last_2s": 4820,
"last_10s": 4820,
"last_40s": 4820
},
"total_receive_rate": {
"last_2s": 16600,
"last_10s": 16600,
"last_40s": 16600
},
"total_send_and_receive_rate": {
"last_2s": 21400,
"last_10s": 21400,
"last_40s": 21400
},
"peak_rate": {
"last_2s": 4820,
"last_10s": 16600,
"last_40s": 21400
},
"cumulative_rate": {
"last_2s": 9630,
"last_10s": 33100,
"last_40s": 42800
},
"clients": []
}
]

View File

@ -0,0 +1,18 @@
interface: enp0s3
IP address is: 10.10.15.129
MAC address is: 08:00:27:c0:4a:4f
Listening on enp0s3
# Host name (port/service if enabled) last 2s last 10s last 40s cumulative
--------------------------------------------------------------------------------------------
1 ubuntu-2004-clean-01 => 4.82KB 4.82KB 4.82KB 9.63KB
10.10.15.72 <= 14.5KB 14.5KB 14.5KB 29.1KB
2 ubuntu-2004-clean-02 => 0B 0B 0B 0B
10.10.15.72 <= 2.02KB 2.02KB 2.02KB 4.04KB
--------------------------------------------------------------------------------------------
Total send rate: 4.82KB 4.82KB 4.82KB
Total receive rate: 16.6KB 16.6KB 16.6KB
Total send and receive rate: 21.4KB 21.4KB 21.4KB
--------------------------------------------------------------------------------------------
Peak rate (sent/received/total): 4.82KB 16.6KB 21.4KB
Cumulative (sent/received/total): 9.63KB 33.1KB 42.8KB
============================================================================================
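In the `iftop` fixture pair above, rates like "4.82KB" and "1.99Mb" are normalized to plain numbers with decimal multiples (4.82KB becomes 4820 and 1.99Mb becomes 1990000) and the trailing b/B dropped. A minimal sketch of that conversion (hypothetical helper name):

```python
MULTIPLIERS = {'K': 1000, 'M': 1000**2, 'G': 1000**3}

def rate_to_number(rate: str) -> int:
    rate = rate.rstrip('bB')                  # drop the bits/bytes suffix
    if rate and rate[-1] in MULTIPLIERS:
        # round() avoids float truncation, e.g. 1.99 * 1e6 -> 1990000
        return round(float(rate[:-1]) * MULTIPLIERS[rate[-1]])
    return int(rate)

print(rate_to_number('4.82KB'))  # 4820
print(rate_to_number('1.99Mb'))  # 1990000
print(rate_to_number('448b'))    # 448
print(rate_to_number('112B'))    # 112
```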
@@ -0,0 +1,18 @@
[
{
"device": "enp0s3",
"ip_address": "10.10.15.129",
"mac_address": "08:00:27:c0:4a:4f",
"clients": [
{
"index": 1,
"connections": [
{
"host_name": "ubuntu-2004-clean-01",
"host_port": "ssh",
"last_2s": 448,
"last_10s": 448,
"last_40s": 448,
"cumulative": 112,
"direction": "send"
},
{
"host_name": "10.10.15.72",
"host_port": "40876",
"last_2s": 208,
"last_10s": 208,
"last_40s": 208,
"cumulative": 52,
"direction": "receive"
}
]
}
],
"total_send_rate": {
"last_2s": 448,
"last_10s": 448,
"last_40s": 448
},
"total_receive_rate": {
"last_2s": 208,
"last_10s": 208,
"last_40s": 208
},
"total_send_and_receive_rate": {
"last_2s": 656,
"last_10s": 656,
"last_40s": 656
},
"peak_rate": {
"last_2s": 448,
"last_10s": 208,
"last_40s": 656
},
"cumulative_rate": {
"last_2s": 112,
"last_10s": 52,
"last_40s": 164
}
}
]
@@ -0,0 +1,57 @@
interface: enp0s3
IP address is: 10.10.15.129
MAC address is: 08:00:27:c0:4a:4f
Listening on enp0s3
# Host name (port/service if enabled) last 2s last 10s last 40s cumulative
--------------------------------------------------------------------------------------------
1 ubuntu-2004-clean-01:ssh => 448b 448b 448b 112B
10.10.15.72:40876 <= 208b 208b 208b 52B
--------------------------------------------------------------------------------------------
Total send rate: 448b 448b 448b
Total receive rate: 208b 208b 208b
Total send and receive rate: 656b 656b 656b
--------------------------------------------------------------------------------------------
Peak rate (sent/received/total): 448b 208b 656b
Cumulative (sent/received/total): 112B 52B 164B
============================================================================================
@@ -0,0 +1,16 @@
[
{
"device": "enp0s3",
"ip_address": "10.10.15.129",
"mac_address": "08:00:27:c0:4a:4f",
"total_send_rate": {
"last_2s": 23200000,
"last_10s": 23200000,
"last_40s": 23200000
},
"total_receive_rate": {
"last_2s": 5650000,
"last_10s": 5650000,
"last_40s": 5650000
},
"total_send_and_receive_rate": {
"last_2s": 28800000,
"last_10s": 28800000,
"last_40s": 28800000
},
"peak_rate": {
"last_2s": 23200000,
"last_10s": 5650000,
"last_40s": 28800000
},
"cumulative_rate": {
"last_2s": 5790000,
"last_10s": 1410000,
"last_40s": 7200000
},
"clients": [
{
"index": 1,
"connections": [
{
"host_name": "ubuntu-2004-clean-01",
"host_port": "33222",
"last_2s": 4720,
"last_10s": 4720,
"last_40s": 4720,
"cumulative": 1180,
"direction": "send"
},
{
"host_name": "10.10.15.72",
"host_port": "https",
"last_2s": 1990000,
"last_10s": 1990000,
"last_40s": 1990000,
"cumulative": 508000,
"direction": "receive"
}
]
},
{
"index": 2,
"connections": [
{
"host_name": "ubuntu-2004-clean-01",
"host_port": "https",
"last_2s": 1980000,
"last_10s": 1980000,
"last_40s": 1980000,
"cumulative": 507000,
"direction": "send"
},
{
"host_name": "10.10.15.73",
"host_port": "34562",
"last_2s": 3170,
"last_10s": 3170,
"last_40s": 3170,
"cumulative": 811,
"direction": "receive"
}
]
}
]
},
{
"device": "enp0s3",
"ip_address": "10.10.15.129",
"mac_address": "08:00:27:c0:4a:4f",
"total_send_rate": {
"last_2s": 23200000,
"last_10s": 23200000,
"last_40s": 23200000
},
"total_receive_rate": {
"last_2s": 5650000,
"last_10s": 5650000,
"last_40s": 5650000
},
"total_send_and_receive_rate": {
"last_2s": 28800000,
"last_10s": 28800000,
"last_40s": 28800000
},
"peak_rate": {
"last_2s": 23200000,
"last_10s": 5650000,
"last_40s": 28800000
},
"cumulative_rate": {
"last_2s": 5790000,
"last_10s": 1410000,
"last_40s": 7200000
},
"clients": [
{
"index": 1,
"connections": [
{
"host_name": "ubuntu-2004-clean-01",
"host_port": "33222",
"last_2s": 4720,
"last_10s": 4720,
"last_40s": 4720,
"cumulative": 1180,
"direction": "send"
},
{
"host_name": "10.10.15.72",
"host_port": "https",
"last_2s": 1990000,
"last_10s": 1990000,
"last_40s": 1990000,
"cumulative": 508000,
"direction": "receive"
}
]
},
{
"index": 2,
"connections": [
{
"host_name": "ubuntu-2004-clean-01",
"host_port": "https",
"last_2s": 1980000,
"last_10s": 1980000,
"last_40s": 1980000,
"cumulative": 507000,
"direction": "send"
},
{
"host_name": "10.10.15.73",
"host_port": "34562",
"last_2s": 3170,
"last_10s": 3170,
"last_40s": 3170,
"cumulative": 811,
"direction": "receive"
}
]
}
]
},
{
"device": "enp0s3",
"ip_address": "10.10.15.129",
"mac_address": "08:00:27:c0:4a:4f",
"total_send_rate": {
"last_2s": 23200000,
"last_10s": 23200000,
"last_40s": 23200000
},
"total_receive_rate": {
"last_2s": 5650000,
"last_10s": 5650000,
"last_40s": 5650000
},
"total_send_and_receive_rate": {
"last_2s": 28800000,
"last_10s": 28800000,
"last_40s": 28800000
},
"peak_rate": {
"last_2s": 23200000,
"last_10s": 5650000,
"last_40s": 28800000
},
"cumulative_rate": {
"last_2s": 5790000,
"last_10s": 1410000,
"last_40s": 7200000
},
"clients": [
{
"index": 1,
"connections": [
{
"host_name": "ubuntu-2004-clean-01",
"host_port": "33222",
"last_2s": 4720,
"last_10s": 4720,
"last_40s": 4720,
"cumulative": 1180,
"direction": "send"
},
{
"host_name": "10.10.15.72",
"host_port": "https",
"last_2s": 1990000,
"last_10s": 1990000,
"last_40s": 1990000,
"cumulative": 508000,
"direction": "receive"
}
]
},
{
"index": 2,
"connections": [
{
"host_name": "ubuntu-2004-clean-01",
"host_port": "https",
"last_2s": 1980000,
"last_10s": 1980000,
"last_40s": 1980000,
"cumulative": 507000,
"direction": "send"
},
{
"host_name": "10.10.15.73",
"host_port": "34562",
"last_2s": 3170,
"last_10s": 3170,
"last_40s": 3170,
"cumulative": 811,
"direction": "receive"
}
]
}
]
}
]
View File
@@ -0,0 +1,48 @@
interface: enp0s3
IP address is: 10.10.15.129
MAC address is: 08:00:27:c0:4a:4f
Listening on enp0s3
# Host name (port/service if enabled) last 2s last 10s last 40s cumulative
--------------------------------------------------------------------------------------------
1 ubuntu-2004-clean-01:33222 => 4.72Kb 4.72Kb 4.72Kb 1.18KB
10.10.15.72:https <= 1.99Mb 1.99Mb 1.99Mb 508KB
2 ubuntu-2004-clean-01:https => 1.98Mb 1.98Mb 1.98Mb 507KB
10.10.15.73:34562 <= 3.17Kb 3.17Kb 3.17Kb 811B
--------------------------------------------------------------------------------------------
Total send rate: 23.2Mb 23.2Mb 23.2Mb
Total receive rate: 5.65Mb 5.65Mb 5.65Mb
Total send and receive rate: 28.8Mb 28.8Mb 28.8Mb
--------------------------------------------------------------------------------------------
Peak rate (sent/received/total): 23.2Mb 5.65Mb 28.8Mb
Cumulative (sent/received/total): 5.79MB 1.41MB 7.20MB
============================================================================================
# Host name (port/service if enabled) last 2s last 10s last 40s cumulative
--------------------------------------------------------------------------------------------
1 ubuntu-2004-clean-01:33222 => 4.72Kb 4.72Kb 4.72Kb 1.18KB
10.10.15.72:https <= 1.99Mb 1.99Mb 1.99Mb 508KB
2 ubuntu-2004-clean-01:https => 1.98Mb 1.98Mb 1.98Mb 507KB
10.10.15.73:34562 <= 3.17Kb 3.17Kb 3.17Kb 811B
--------------------------------------------------------------------------------------------
Total send rate: 23.2Mb 23.2Mb 23.2Mb
Total receive rate: 5.65Mb 5.65Mb 5.65Mb
Total send and receive rate: 28.8Mb 28.8Mb 28.8Mb
--------------------------------------------------------------------------------------------
Peak rate (sent/received/total): 23.2Mb 5.65Mb 28.8Mb
Cumulative (sent/received/total): 5.79MB 1.41MB 7.20MB
============================================================================================
# Host name (port/service if enabled) last 2s last 10s last 40s cumulative
--------------------------------------------------------------------------------------------
1 ubuntu-2004-clean-01:33222 => 4.72Kb 4.72Kb 4.72Kb 1.18KB
10.10.15.72:https <= 1.99Mb 1.99Mb 1.99Mb 508KB
2 ubuntu-2004-clean-01:https => 1.98Mb 1.98Mb 1.98Mb 507KB
10.10.15.73:34562 <= 3.17Kb 3.17Kb 3.17Kb 811B
--------------------------------------------------------------------------------------------
Total send rate: 23.2Mb 23.2Mb 23.2Mb
Total receive rate: 5.65Mb 5.65Mb 5.65Mb
Total send and receive rate: 28.8Mb 28.8Mb 28.8Mb
--------------------------------------------------------------------------------------------
Peak rate (sent/received/total): 23.2Mb 5.65Mb 28.8Mb
Cumulative (sent/received/total): 5.79MB 1.41MB 7.20MB
============================================================================================
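Comparing the text fixture above with its JSON counterpart shows the unit conversion involved: iftop's human-readable rates use decimal SI multipliers, so `4.72Kb` becomes `4720` and `1.99Mb` becomes `1990000`. A minimal stand-alone sketch of that conversion (a hypothetical helper for illustration, not jc's actual implementation):

```python
import re

# Decimal multipliers, matching the fixture JSON (4.72Kb -> 4720, 508KB -> 508000)
_MULT = {'': 1, 'K': 1_000, 'M': 1_000_000, 'G': 1_000_000_000}

def rate_to_number(value: str) -> int:
    """Convert an iftop-style rate like '4.72Kb' (bits) or '508KB' (bytes)
    to a plain number. The b/B suffix only changes the unit label, not
    the arithmetic."""
    match = re.match(r'([\d.]+)([KMG]?)[bB]$', value)
    if not match:
        raise ValueError(f'unrecognized rate: {value!r}')
    number, prefix = match.groups()
    return round(float(number) * _MULT[prefix])
```

Applied to the first fixture row, `rate_to_number('4.72Kb')` matches the `4720` seen under `last_2s` in the JSON above.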
View File
@@ -0,0 +1,47 @@
import os
import unittest
import json
from typing import Dict
from jc.parsers.debconf_show import parse

THIS_DIR = os.path.dirname(os.path.abspath(__file__))


class MyTests(unittest.TestCase):
    f_in: Dict = {}
    f_json: Dict = {}

    @classmethod
    def setUpClass(cls):
        fixtures = {
            'debconf_show': (
                'fixtures/generic/debconf-show.out',
                'fixtures/generic/debconf-show.json')
        }

        for file, filepaths in fixtures.items():
            with open(os.path.join(THIS_DIR, filepaths[0]), 'r', encoding='utf-8') as a, \
                 open(os.path.join(THIS_DIR, filepaths[1]), 'r', encoding='utf-8') as b:
                cls.f_in[file] = a.read()
                cls.f_json[file] = json.loads(b.read())

    def test_debconf_show_nodata(self):
        """
        Test 'debconf_show' with no data
        """
        self.assertEqual(parse('', quiet=True), [])

    def test_debconf_show_centos_7_7(self):
        """
        Test 'debconf_show onlyoffice-documentserver'
        """
        self.assertEqual(
            parse(self.f_in['debconf_show'], quiet=True),
            self.f_json['debconf_show']
        )


if __name__ == '__main__':
    unittest.main()
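For reference, `debconf-show <package>` prints one line per debconf question in the form `* name: value`, where the leading `*` marks questions that have already been asked. A rough stand-alone sketch of that line format (the dict keys below are illustrative and do not mirror jc's schema):

```python
def parse_debconf_show(output: str) -> list:
    """Sketch of 'debconf-show <pkg>' line parsing: '* name: value'
    means the question was asked; a line without '*' was not. The
    value may be empty."""
    entries = []
    for line in output.splitlines():
        line = line.strip()
        if not line:
            continue
        asked = line.startswith('*')
        # drop the leading '*' marker and any padding before the name
        body = line.lstrip('* ').strip()
        name, _, value = body.partition(':')
        entries.append({'asked': asked, 'name': name, 'value': value.strip()})
    return entries
```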
57
tests/test_iftop.py Normal file
View File
@@ -0,0 +1,57 @@
import os
import unittest
import json
import jc.parsers.iftop

THIS_DIR = os.path.dirname(os.path.abspath(__file__))


class MyTests(unittest.TestCase):
    # input
    with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/ubuntu-20.10/iftop-b-n1.out'), 'r', encoding='utf-8') as f:
        ubuntu_20_10_iftop_b_n1 = f.read()

    with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/ubuntu-20.10/iftop-b-n3.out'), 'r', encoding='utf-8') as f:
        ubuntu_20_10_iftop_b_n3 = f.read()

    with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/ubuntu-20.10/iftop-b-n1-noport.out'), 'r', encoding='utf-8') as f:
        ubuntu_20_10_iftop_b_n1_noport = f.read()

    # output
    with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/ubuntu-20.10/iftop-b-n1.json'), 'r', encoding='utf-8') as f:
        ubuntu_20_10_iftop_b_n1_json = json.loads(f.read())

    with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/ubuntu-20.10/iftop-b-n3.json'), 'r', encoding='utf-8') as f:
        ubuntu_20_10_iftop_b_n3_json = json.loads(f.read())

    with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/ubuntu-20.10/iftop-b-n1-noport.json'), 'r', encoding='utf-8') as f:
        ubuntu_20_10_iftop_b_n1_noport_json = json.loads(f.read())

    def test_iftop_nodata(self):
        """
        Test 'iftop -b' with no data
        """
        self.assertEqual(jc.parsers.iftop.parse('', quiet=True), [])

    def test_iftop_ubuntu_20_10(self):
        """
        Test 'iftop -i <device> -t -P -s 1' with units as MiB on Ubuntu 20.10
        """
        self.assertEqual(jc.parsers.iftop.parse(self.ubuntu_20_10_iftop_b_n1, quiet=True), self.ubuntu_20_10_iftop_b_n1_json)

    def test_iftop_multiple_runs_ubuntu_20_10(self):
        """
        Test 'iftop -i <device> -t -P -s 1' with units as MiB on Ubuntu 20.10
        """
        self.assertEqual(jc.parsers.iftop.parse(self.ubuntu_20_10_iftop_b_n3, quiet=True), self.ubuntu_20_10_iftop_b_n3_json)

    def test_iftop_ubuntu_20_10_no_port(self):
        """
        Test 'iftop -i <device> -t -B -s 1' with units as MiB on Ubuntu 20.10
        """
        self.assertEqual(jc.parsers.iftop.parse(self.ubuntu_20_10_iftop_b_n1_noport, quiet=True), self.ubuntu_20_10_iftop_b_n1_noport_json)


if __name__ == '__main__':
    unittest.main()
View File
@@ -45,6 +45,9 @@ class MyTests(unittest.TestCase):
    with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/ubuntu-18.04/iptables-raw.out'), 'r', encoding='utf-8') as f:
        ubuntu_18_4_iptables_raw = f.read()

    with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/generic/iptables-no-jump.out'), 'r', encoding='utf-8') as f:
        generic_iptables_no_jump = f.read()

    # output
    with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/centos-7.7/iptables-filter.json'), 'r', encoding='utf-8') as f:
        centos_7_7_iptables_filter_json = json.loads(f.read())
@@ -82,6 +85,9 @@ class MyTests(unittest.TestCase):
    with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/ubuntu-18.04/iptables-raw.json'), 'r', encoding='utf-8') as f:
        ubuntu_18_4_iptables_raw_json = json.loads(f.read())

    with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/generic/iptables-no-jump.json'), 'r', encoding='utf-8') as f:
        generic_iptables_no_jump_json = json.loads(f.read())

    def test_iptables_nodata(self):
        """
@@ -161,6 +167,12 @@ class MyTests(unittest.TestCase):
        """
        self.assertEqual(jc.parsers.iptables.parse(self.ubuntu_18_4_iptables_raw, quiet=True), self.ubuntu_18_4_iptables_raw_json)

    def test_iptables_no_jump_generic(self):
        """
        Test 'sudo iptables' with no jump target
        """
        self.assertEqual(jc.parsers.iptables.parse(self.generic_iptables_no_jump, quiet=True), self.generic_iptables_no_jump_json)


if __name__ == '__main__':
    unittest.main()
View File
@@ -1,60 +0,0 @@
import unittest
import json
import jc.parsers.iso_datetime


class MyTests(unittest.TestCase):
    def test_iso_datetime_nodata(self):
        """
        Test 'iso_datetime' with no data
        """
        self.assertEqual(jc.parsers.iso_datetime.parse('', quiet=True), {})

    def test_iso_datetime_z(self):
        """
        Test ISO datetime string with Z timezone
        """
        data = r'2007-04-05T14:30Z'
        expected = json.loads(r'''{"year":2007,"month":"Apr","month_num":4,"day":5,"weekday":"Thu","weekday_num":4,"hour":2,"hour_24":14,"minute":30,"second":0,"microsecond":0,"period":"PM","utc_offset":"+0000","day_of_year":95,"week_of_year":14,"iso":"2007-04-05T14:30:00+00:00","timestamp":1175783400}''')
        self.assertEqual(jc.parsers.iso_datetime.parse(data, quiet=True), expected)

    def test_iso_datetime_microseconds(self):
        """
        Test ISO datetime string with Z timezone
        """
        data = r'2007-04-05T14:30.555Z'
        expected = json.loads(r'''{"year":2007,"month":"Apr","month_num":4,"day":5,"weekday":"Thu","weekday_num":4,"hour":2,"hour_24":14,"minute":0,"second":30,"microsecond":555000,"period":"PM","utc_offset":"+0000","day_of_year":95,"week_of_year":14,"iso":"2007-04-05T14:00:30.555000+00:00","timestamp":1175781630}''')
        self.assertEqual(jc.parsers.iso_datetime.parse(data, quiet=True), expected)

    def test_iso_datetime_plus_offset(self):
        """
        Test ISO datetime string with + offset
        """
        data = r'2007-04-05T14:30+03:30'
        expected = json.loads(r'''{"year":2007,"month":"Apr","month_num":4,"day":5,"weekday":"Thu","weekday_num":4,"hour":2,"hour_24":14,"minute":30,"second":0,"microsecond":0,"period":"PM","utc_offset":"+0330","day_of_year":95,"week_of_year":14,"iso":"2007-04-05T14:30:00+03:30","timestamp":1175770800}''')
        self.assertEqual(jc.parsers.iso_datetime.parse(data, quiet=True), expected)

    def test_iso_datetime_negative_offset(self):
        """
        Test ISO datetime string with - offset
        """
        data = r'2007-04-05T14:30-03:30'
        expected = json.loads(r'''{"year":2007,"month":"Apr","month_num":4,"day":5,"weekday":"Thu","weekday_num":4,"hour":2,"hour_24":14,"minute":30,"second":0,"microsecond":0,"period":"PM","utc_offset":"-0330","day_of_year":95,"week_of_year":14,"iso":"2007-04-05T14:30:00-03:30","timestamp":1175796000}''')
        self.assertEqual(jc.parsers.iso_datetime.parse(data, quiet=True), expected)

    def test_iso_datetime_nocolon_offset(self):
        """
        Test ISO datetime string with no colon offset
        """
        data = r'2007-04-05T14:30+0300'
        expected = json.loads(r'''{"year":2007,"month":"Apr","month_num":4,"day":5,"weekday":"Thu","weekday_num":4,"hour":2,"hour_24":14,"minute":30,"second":0,"microsecond":0,"period":"PM","utc_offset":"+0300","day_of_year":95,"week_of_year":14,"iso":"2007-04-05T14:30:00+03:00","timestamp":1175772600}''')
        self.assertEqual(jc.parsers.iso_datetime.parse(data, quiet=True), expected)


if __name__ == '__main__':
    unittest.main()
View File
@@ -24,6 +24,9 @@ class MyTests(unittest.TestCase):
    with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/aix-7.1/mount.out'), 'r', encoding='utf-8') as f:
        aix_7_1_mount = f.read()

    with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/generic/mount-spaces-in-mountpoint.out'), 'r', encoding='utf-8') as f:
        generic_mount_spaces_in_mountpoint = f.read()

    # output
    with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/centos-7.7/mount.json'), 'r', encoding='utf-8') as f:
        centos_7_7_mount_json = json.loads(f.read())
@@ -40,6 +43,9 @@ class MyTests(unittest.TestCase):
    with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/aix-7.1/mount.json'), 'r', encoding='utf-8') as f:
        aix_7_1_mount_json = json.loads(f.read())

    with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/generic/mount-spaces-in-mountpoint.json'), 'r', encoding='utf-8') as f:
        generic_mount_spaces_in_mountpoint_json = json.loads(f.read())

    def test_mount_nodata(self):
        """
@@ -77,6 +83,12 @@ class MyTests(unittest.TestCase):
        """
        self.assertEqual(jc.parsers.mount.parse(self.aix_7_1_mount, quiet=True), self.aix_7_1_mount_json)

    def test_mount_spaces_in_mountpoint(self):
        """
        Test 'mount' with spaces in the mountpoint
        """
        self.assertEqual(jc.parsers.mount.parse(self.generic_mount_spaces_in_mountpoint, quiet=True), self.generic_mount_spaces_in_mountpoint_json)


if __name__ == '__main__':
    unittest.main()
View File
@@ -0,0 +1,60 @@
import os
import unittest
import json
from typing import Dict
from jc.parsers.pkg_index_apk import parse

THIS_DIR = os.path.dirname(os.path.abspath(__file__))


class Apkindex(unittest.TestCase):
    f_in: Dict = {}
    f_json: Dict = {}

    @classmethod
    def setUpClass(cls):
        fixtures = {
            'normal': (
                'fixtures/generic/pkg-index-apk.out',
                'fixtures/generic/pkg-index-apk.json'),
            'raw': (
                'fixtures/generic/pkg-index-apk.out',
                'fixtures/generic/pkg-index-apk-raw.json')
        }

        for file, filepaths in fixtures.items():
            with open(os.path.join(THIS_DIR, filepaths[0]), 'r', encoding='utf-8') as a, \
                 open(os.path.join(THIS_DIR, filepaths[1]), 'r', encoding='utf-8') as b:
                cls.f_in[file] = a.read()
                cls.f_json[file] = json.loads(b.read())

    def test_pkg_index_apk_nodata(self):
        """
        Test 'pkg-index-apk' with no data
        """
        self.assertEqual(parse('', quiet=True), [])

    def test_pkg_index_apk(self):
        """
        Test 'pkg-index-apk' normal output
        """
        self.assertEqual(
            parse(self.f_in['normal'], quiet=True),
            self.f_json['normal']
        )

    def test_pkg_index_apk_raw(self):
        """
        Test 'pkg-index-apk' raw output
        """
        self.assertEqual(
            parse(self.f_in['raw'], quiet=True, raw=True),
            self.f_json['raw']
        )


if __name__ == "__main__":
    unittest.main()
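The APKINDEX files these fixtures cover consist of blank-line-separated records of single-letter `key:value` fields (`P:` package name, `V:` version, and so on). A minimal, hypothetical sketch of the record splitting (jc additionally expands the one-letter keys into descriptive field names, which is omitted here):

```python
def parse_apk_index(text: str) -> list:
    """Split an Alpine APKINDEX-style document into records: packages
    are separated by blank lines; each line is a one-letter key, a
    colon, and a value (e.g. 'P:musl', 'V:1.2.4-r2')."""
    packages, current = [], {}
    for line in text.splitlines():
        if not line.strip():
            # blank line ends the current package record
            if current:
                packages.append(current)
                current = {}
            continue
        key, _, value = line.partition(':')
        current[key] = value
    if current:
        packages.append(current)
    return packages
```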
View File
@@ -0,0 +1,47 @@
import os
import unittest
import json
from typing import Dict
from jc.parsers.pkg_index_deb import parse

THIS_DIR = os.path.dirname(os.path.abspath(__file__))


class MyTests(unittest.TestCase):
    f_in: Dict = {}
    f_json: Dict = {}

    @classmethod
    def setUpClass(cls):
        fixtures = {
            'deb_packages_index': (
                'fixtures/generic/pkg-index-deb.out',
                'fixtures/generic/pkg-index-deb.json')
        }

        for file, filepaths in fixtures.items():
            with open(os.path.join(THIS_DIR, filepaths[0]), 'r', encoding='utf-8') as a, \
                 open(os.path.join(THIS_DIR, filepaths[1]), 'r', encoding='utf-8') as b:
                cls.f_in[file] = a.read()
                cls.f_json[file] = json.loads(b.read())

    def test_pkg_index_deb_nodata(self):
        """
        Test 'pkg-index-deb' with no data
        """
        self.assertEqual(parse('', quiet=True), [])

    def test_pkg_index_deb(self):
        """
        Test 'pkg-index-deb'
        """
        self.assertEqual(
            parse(self.f_in['deb_packages_index'], quiet=True),
            self.f_json['deb_packages_index']
        )


if __name__ == '__main__':
    unittest.main()
View File
@@ -0,0 +1,46 @@
import os
import unittest
from jc.parsers.proc_cmdline import parse

THIS_DIR = os.path.dirname(os.path.abspath(__file__))


class MyTests(unittest.TestCase):
    def test_proc_cmdline_nodata(self):
        """
        Test 'proc_cmdline' with no data
        """
        self.assertEqual(parse('', quiet=True), {})

    def test_proc_cmdline_samples(self):
        """
        Test 'proc_cmdline' with various samples
        """
        test_map = {
            'BOOT_IMAGE=/vmlinuz-5.4.0-166-generic root=/dev/mapper/ubuntu--vg-ubuntu--lv ro debian-installer/language=ru keyboard-configuration/layoutcode?=ru':
                {"BOOT_IMAGE":"/vmlinuz-5.4.0-166-generic","root":"/dev/mapper/ubuntu--vg-ubuntu--lv","debian-installer/language":"ru","keyboard-configuration/layoutcode?":"ru","_options":["ro"]},
            'BOOT_IMAGE=/boot/vmlinuz-4.4.0-210-generic root=UUID=e1d708ba-4448-4e96-baed-94b277eaa128 ro net.ifnames=0 biosdevname=0':
                {"BOOT_IMAGE":"/boot/vmlinuz-4.4.0-210-generic","root":"UUID=e1d708ba-4448-4e96-baed-94b277eaa128","net.ifnames":"0","biosdevname":"0","_options":["ro"]},
            'BOOT_IMAGE=/boot/vmlinuz-3.13.0-102-generic root=UUID=55707609-d20a-45f2-9130-60525bebf01f ro':
                {"BOOT_IMAGE":"/boot/vmlinuz-3.13.0-102-generic","root":"UUID=55707609-d20a-45f2-9130-60525bebf01f","_options":["ro"]},
            'BOOT_IMAGE=/vmlinuz-5.4.0-135-generic root=/dev/mapper/vg0-lv--root ro maybe-ubiquity':
                {"BOOT_IMAGE":"/vmlinuz-5.4.0-135-generic","root":"/dev/mapper/vg0-lv--root","_options":["ro","maybe-ubiquity"]},
            'BOOT_IMAGE=/vmlinuz-5.4.0-107-generic root=UUID=1b83a367-43a0-4e18-8ae3-3aaa37a89c7d ro quiet nomodeset splash net.ifnames=0 vt.handoff=7':
                {"BOOT_IMAGE":"/vmlinuz-5.4.0-107-generic","root":"UUID=1b83a367-43a0-4e18-8ae3-3aaa37a89c7d","net.ifnames":"0","vt.handoff":"7","_options":["ro","quiet","nomodeset","splash"]},
            'BOOT_IMAGE=clonezilla/live/vmlinuz consoleblank=0 keyboard-options=grp:ctrl_shift_toggle,lctrl_shift_toggle ethdevice-timeout=130 toram=filesystem.squashfs boot=live config noswap nolocales edd=on ocs_daemonon="ssh lighttpd" nomodeset noprompt ocs_live_run="sudo screen /usr/sbin/ocs-sr -g auto -e1 auto -e2 -batch -r -j2 -k -scr -p true restoreparts win7-64 sda1" ocs_live_extra_param="" keyboard-layouts=us,ru ocs_live_batch="no" locales=ru_RU.UTF-8 vga=788 nosplash net.ifnames=0 nodmraid components union=overlay fetch=http://172.16.11.8/tftpboot/clonezilla/live/filesystem.squashfs ocs_postrun99="sudo reboot" initrd=clonezilla/live/initrd.img':
                {"BOOT_IMAGE":"clonezilla/live/vmlinuz","consoleblank":"0","keyboard-options":"grp:ctrl_shift_toggle,lctrl_shift_toggle","ethdevice-timeout":"130","toram":"filesystem.squashfs","boot":"live","edd":"on","ocs_daemonon":"ssh lighttpd","ocs_live_run":"sudo screen /usr/sbin/ocs-sr -g auto -e1 auto -e2 -batch -r -j2 -k -scr -p true restoreparts win7-64 sda1","ocs_live_extra_param":"","keyboard-layouts":"us,ru","ocs_live_batch":"no","locales":"ru_RU.UTF-8","vga":"788","net.ifnames":"0","union":"overlay","fetch":"http://172.16.11.8/tftpboot/clonezilla/live/filesystem.squashfs","ocs_postrun99":"sudo reboot","initrd":"clonezilla/live/initrd.img","_options":["config","noswap","nolocales","nomodeset","noprompt","nosplash","nodmraid","components"]}
        }

        for data_in, expected in test_map.items():
            self.assertEqual(parse(data_in, quiet=True), expected)


if __name__ == '__main__':
    unittest.main()
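The expected values in the map above follow directly from the `/proc/cmdline` grammar: `key=value` tokens become fields, and bare flags collect under `_options`. A simplified sketch of that split (quoted values such as `ocs_daemonon="ssh lighttpd"` need the shell-style tokenizing the real parser does; plain `str.split` is used here):

```python
def parse_cmdline(cmdline: str) -> dict:
    """Minimal /proc/cmdline sketch: 'key=value' tokens become dict
    entries (only the first '=' splits, so root=UUID=... keeps its
    value intact); bare words collect under '_options'."""
    result, options = {}, []
    for token in cmdline.split():
        if '=' in token:
            key, _, value = token.partition('=')
            result[key] = value
        else:
            options.append(token)
    if options:
        result['_options'] = options
    return result
```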
48
tests/test_swapon.py Normal file
View File
@@ -0,0 +1,48 @@
import os
import unittest
import json
from typing import Dict
from jc.parsers.swapon import parse

THIS_DIR = os.path.dirname(os.path.abspath(__file__))


class Swapon(unittest.TestCase):
    f_in: Dict = {}
    f_json: Dict = {}

    @classmethod
    def setUpClass(cls):
        fixtures = {
            "swapon_all": ("fixtures/generic/swapon-all-v1.out", "fixtures/generic/swapon-all-v1.json"),
            "swapon_all_v2": ("fixtures/generic/swapon-all-v2.out", "fixtures/generic/swapon-all-v2.json"),
        }

        for file, filepaths in fixtures.items():
            with open(os.path.join(THIS_DIR, filepaths[0]), "r", encoding="utf-8") as a, open(
                os.path.join(THIS_DIR, filepaths[1]), "r", encoding="utf-8"
            ) as b:
                cls.f_in[file] = a.read()
                cls.f_json[file] = json.loads(b.read())

    def test_swapon_nodata(self):
        """
        Test 'swapon' with no data
        """
        self.assertEqual(parse('', quiet=True), [])

    def test_swapon_all(self):
        """
        Test 'swapon --output-all'
        """
        self.assertEqual(parse(self.f_in["swapon_all"], quiet=True), self.f_json["swapon_all"])

    def test_swapon_all_v2(self):
        """
        Test 'swapon --output-all'
        """
        self.assertEqual(parse(self.f_in["swapon_all_v2"], quiet=True), self.f_json["swapon_all_v2"])


if __name__ == "__main__":
    unittest.main()
47
tests/test_tune2fs.py Normal file
View File
@@ -0,0 +1,47 @@
import os
import unittest
import json
from typing import Dict
from jc.parsers.tune2fs import parse

THIS_DIR = os.path.dirname(os.path.abspath(__file__))


class MyTests(unittest.TestCase):
    f_in: Dict = {}
    f_json: Dict = {}

    @classmethod
    def setUpClass(cls):
        fixtures = {
            'tune2fs': (
                'fixtures/generic/tune2fs-l.out',
                'fixtures/generic/tune2fs-l.json')
        }

        for file, filepaths in fixtures.items():
            with open(os.path.join(THIS_DIR, filepaths[0]), 'r', encoding='utf-8') as a, \
                 open(os.path.join(THIS_DIR, filepaths[1]), 'r', encoding='utf-8') as b:
                cls.f_in[file] = a.read()
                cls.f_json[file] = json.loads(b.read())

    def test_tune2fs_nodata(self):
        """
        Test 'tune2fs' with no data
        """
        self.assertEqual(parse('', quiet=True), {})

    def test_tune2fs_l(self):
        """
        Test 'tune2fs -l'
        """
        self.assertEqual(
            parse(self.f_in['tune2fs'], quiet=True),
            self.f_json['tune2fs']
        )


if __name__ == '__main__':
    unittest.main()
View File
@@ -40,6 +40,9 @@ class MyTests(unittest.TestCase):
    with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/ubuntu-18.04/vmstat-1-long.out'), 'r', encoding='utf-8') as f:
        ubuntu_18_04_vmstat_1_long = f.read()

    with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/generic/vmstat-extra-wide.out'), 'r', encoding='utf-8') as f:
        generic_vmstat_extra_wide = f.read()

    # output
    with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/centos-7.7/vmstat.json'), 'r', encoding='utf-8') as f:
        centos_7_7_vmstat_json = json.loads(f.read())
@@ -65,6 +68,9 @@ class MyTests(unittest.TestCase):
    with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/ubuntu-18.04/vmstat-1-long.json'), 'r', encoding='utf-8') as f:
        ubuntu_18_04_vmstat_1_long_json = json.loads(f.read())

    with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/generic/vmstat-extra-wide.json'), 'r', encoding='utf-8') as f:
        generic_vmstat_extra_wide_json = json.loads(f.read())

    def test_vmstat_nodata(self):
        """
@@ -120,6 +126,12 @@ class MyTests(unittest.TestCase):
        """
        self.assertEqual(jc.parsers.vmstat.parse(self.ubuntu_18_04_vmstat_1_long, quiet=True), self.ubuntu_18_04_vmstat_1_long_json)

    def test_vmstat_extra_wide(self):
        """
        Test 'vmstat -w' with wider output
        """
        self.assertEqual(jc.parsers.vmstat.parse(self.generic_vmstat_extra_wide, quiet=True), self.generic_vmstat_extra_wide_json)


if __name__ == '__main__':
    unittest.main()
View File
@@ -45,6 +45,9 @@ class MyTests(unittest.TestCase):
    with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/ubuntu-18.04/vmstat-1-long.out'), 'r', encoding='utf-8') as f:
        ubuntu_18_04_vmstat_1_long = f.read()

    with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/generic/vmstat-extra-wide.out'), 'r', encoding='utf-8') as f:
        generic_vmstat_extra_wide = f.read()

    # output
    with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/centos-7.7/vmstat-streaming.json'), 'r', encoding='utf-8') as f:
        centos_7_7_vmstat_streaming_json = json.loads(f.read())
@@ -70,6 +73,9 @@ class MyTests(unittest.TestCase):
    with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/ubuntu-18.04/vmstat-1-long-streaming.json'), 'r', encoding='utf-8') as f:
        ubuntu_18_04_vmstat_1_long_streaming_json = json.loads(f.read())

    with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/generic/vmstat-extra-wide-streaming.json'), 'r', encoding='utf-8') as f:
        generic_vmstat_extra_wide_streaming_json = json.loads(f.read())

    def test_vmstat_s_nodata(self):
        """
@@ -131,6 +137,12 @@ class MyTests(unittest.TestCase):
        """
        self.assertEqual(list(jc.parsers.vmstat_s.parse(self.ubuntu_18_04_vmstat_1_long.splitlines(), quiet=True)), self.ubuntu_18_04_vmstat_1_long_streaming_json)

    def test_vmstat_extra_wide(self):
        """
        Test 'vmstat -w' with extra wide output
        """
        self.assertEqual(list(jc.parsers.vmstat_s.parse(self.generic_vmstat_extra_wide.splitlines(), quiet=True)), self.generic_vmstat_extra_wide_streaming_json)


if __name__ == '__main__':
    unittest.main()
View File
@@ -1,4 +1,3 @@
-import json
 import re
 import unittest
 from typing import Optional
@@ -18,18 +17,20 @@ from jc.parsers.xrandr import (
     Mode,
     Model,
     Device,
-    Screen,
+    Screen
 )
+import jc.parsers.xrandr
+import pprint


 class XrandrTests(unittest.TestCase):
+    def setUp(self):
+        jc.parsers.xrandr.parse_state = {}
+
     def test_xrandr_nodata(self):
         """
         Test 'xrandr' with no data
         """
-        self.assertEqual(parse("", quiet=True), {"screens": []})
+        self.assertEqual(parse("", quiet=True), {})

     def test_regexes(self):
         devices = [
@@ -287,6 +288,7 @@ class XrandrTests(unittest.TestCase):
             "serial_number": "0",
         }

+        jc.parsers.xrandr.parse_state = {}
         actual: Optional[Model] = _parse_model(generic_edid)
         self.assertIsNotNone(actual)
@@ -299,5 +301,16 @@ class XrandrTests(unittest.TestCase):
         self.assertIsNone(actual)

+    def test_issue_490(self):
+        """test for issue 490: https://github.com/kellyjonbrazil/jc/issues/490"""
+        data_in = '''\
+Screen 0: minimum 1024 x 600, current 1024 x 600, maximum 1024 x 600
+default connected 1024x600+0+0 0mm x 0mm
+   1024x600       0.00*
+'''
+        expected = {"screens":[{"devices":[{"modes":[{"resolution_width":1024,"resolution_height":600,"is_high_resolution":False,"frequencies":[{"frequency":0.0,"is_current":True,"is_preferred":False}]}],"is_connected":True,"is_primary":False,"device_name":"default","rotation":"normal","reflection":"normal","resolution_width":1024,"resolution_height":600,"offset_width":0,"offset_height":0,"dimension_width":0,"dimension_height":0}],"screen_number":0,"minimum_width":1024,"minimum_height":600,"current_width":1024,"current_height":600,"maximum_width":1024,"maximum_height":600}]}
+        self.assertEqual(jc.parsers.xrandr.parse(data_in), expected)

 if __name__ == "__main__":
     unittest.main()