diff --git a/.github/workflows/package_linter.yml b/.github/workflows/package_linter.yml index 73e99db..52f9538 100644 --- a/.github/workflows/package_linter.yml +++ b/.github/workflows/package_linter.yml @@ -24,7 +24,7 @@ jobs: - name: Install dependencies run: | python -m pip install --upgrade pip - pip install toml + pip install toml pyparsing six - name: 'Clone YunoHost apps package linter' run: | diff --git a/README.md b/README.md index 8cf224e..d019c01 100644 --- a/README.md +++ b/README.md @@ -1,85 +1,63 @@ -# Packaging an app, starting from this example - -* Copy this app before working on it, using the ['Use this template'](https://github.com/YunoHost/example_ynh/generate) button on the Github repo. -* Edit the `manifest.toml` with app specific info. -* Edit the `install`, `upgrade`, `remove`, `backup` and `restore` scripts, and any relevant conf files in `conf/`. - * Using the [script helpers documentation.](https://yunohost.org/packaging_apps_helpers) -* Edit the `change_url` and `config` scripts too, or remove them if you have no use of them -* Add a `LICENSE` file for the package. NB: this LICENSE file is not meant to necessarily be the LICENSE of the upstream app - it is only the LICENSE you want this package's code to published with ;). We recommend to use [the AGPL-3](https://www.gnu.org/licenses/agpl-3.0.txt). -* Edit `doc/DISCLAIMER*.md` -* The `README.md` files are to be automatically generated by https://github.com/YunoHost/apps/tree/master/tools/README-generator - ---- -# Example app for YunoHost +# Scrutiny for YunoHost -[![Integration level](https://dash.yunohost.org/integration/example.svg)](https://dash.yunohost.org/appci/app/example) ![Working status](https://ci-apps.yunohost.org/ci/badges/example.status.svg) ![Maintenance status](https://ci-apps.yunohost.org/ci/badges/example.maintain.svg) -[![Install Example app with YunoHost](https://install-app.yunohost.org/install-with-yunohost.svg)](https://install-app.yunohost.org/?app=example) +[![Integration level](https://dash.yunohost.org/integration/scrutiny.svg)](https://dash.yunohost.org/appci/app/scrutiny) ![Working status](https://ci-apps.yunohost.org/ci/badges/scrutiny.status.svg) ![Maintenance status](https://ci-apps.yunohost.org/ci/badges/scrutiny.maintain.svg) + +[![Install Scrutiny with YunoHost](https://install-app.yunohost.org/install-with-yunohost.svg)](https://install-app.yunohost.org/?app=scrutiny) *[Lire ce readme en français.](./README_fr.md)* -> *This package allows you to install Example app quickly and simply on a YunoHost server. +> *This package allows you to install Scrutiny quickly and simply on a YunoHost server. If you don't have YunoHost, please consult [the guide](https://yunohost.org/#/install) to learn how to install it.* ## Overview -Some long and extensive description of what the app is and does, lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. +**Scrutiny is a Hard Drive Health Dashboard & Monitoring solution, merging manufacturer provided S.M.A.R.T metrics with real-world failure rates.** + +> NOTE: Scrutiny is a Work-in-Progress and still has some rough edges. ### Features -- Ut enim ad minim veniam, quis nostrud exercitation ullamco ; -- Laboris nisi ut aliquip ex ea commodo consequat ; -- Duis aute irure dolor in reprehenderit in voluptate ; -- Velit esse cillum dolore eu fugiat nulla pariatur ; -- Excepteur sint occaecat cupidatat non proident, sunt in culpa." 
+Scrutiny is a simple but focused application, with a couple of core features: + +- Web UI Dashboard - focused on Critical metrics +- `smartd` integration (no re-inventing the wheel) +- Auto-detection of all connected hard-drives +- S.M.A.R.T metric tracking for historical trends +- Customized thresholds using real world failure rates +- Temperature tracking +- Provided as an all-in-one Docker image (but can be installed manually) +- Configurable Alerting/Notifications via Webhooks +- (Future) Hard Drive performance testing & tracking -**Shipped version:** 1.0~ynh1 - -**Demo:** https://demo.example.com +**Shipped version:** 0.6.0~ynh1 ## Screenshots -![Screenshot of Example app](./doc/screenshots/example.jpg) - -## Disclaimers / important information - -* Any known limitations, constrains or stuff not working, such as (but not limited to): - * requiring a full dedicated domain ? - * architectures not supported ? - * not-working single-sign on or LDAP integration ? - * the app requires an important amount of RAM / disk / .. to install or to work properly - * etc... - -* Other infos that people should be aware of, such as: - * any specific step to perform after installing (such as manually finishing the install, specific admin credentials, ...) - * how to configure / administrate the application if it ain't obvious - * upgrade process / specificities / things to be aware of ? - * security considerations ? +![Screenshot of Scrutiny](./doc/screenshots/dashboard.png) ## Documentation and resources -* Official app website: -* Official user documentation: -* Official admin documentation: -* Upstream app code repository: -* YunoHost documentation for this app: -* Report a bug: +* Official admin documentation: +* Upstream app code repository: +* YunoHost documentation for this app: +* Report a bug: ## Developer info -Please send your pull request to the [testing branch](https://github.com/YunoHost-Apps/example_ynh/tree/testing). +Please send your pull request to the [testing branch](https://github.com/YunoHost-Apps/scrutiny_ynh/tree/testing). To try the testing branch, please proceed like that. ``` bash -sudo yunohost app install https://github.com/YunoHost-Apps/example_ynh/tree/testing --debug +sudo yunohost app install https://github.com/YunoHost-Apps/scrutiny_ynh/tree/testing --debug or -sudo yunohost app upgrade example -u https://github.com/YunoHost-Apps/example_ynh/tree/testing --debug +sudo yunohost app upgrade scrutiny -u https://github.com/YunoHost-Apps/scrutiny_ynh/tree/testing --debug ``` **More info regarding app packaging:** diff --git a/README_fr.md b/README_fr.md new file mode 100644 index 0000000..581ee56 --- /dev/null +++ b/README_fr.md @@ -0,0 +1,63 @@ + + +# Scrutiny pour YunoHost + +[![Niveau d’intégration](https://dash.yunohost.org/integration/scrutiny.svg)](https://dash.yunohost.org/appci/app/scrutiny) ![Statut du fonctionnement](https://ci-apps.yunohost.org/ci/badges/scrutiny.status.svg) ![Statut de maintenance](https://ci-apps.yunohost.org/ci/badges/scrutiny.maintain.svg) + +[![Installer Scrutiny avec YunoHost](https://install-app.yunohost.org/install-with-yunohost.svg)](https://install-app.yunohost.org/?app=scrutiny) + +*[Read this readme in english.](./README.md)* + +> *Ce package vous permet d’installer Scrutiny rapidement et simplement sur un serveur YunoHost. 
+Si vous n’avez pas YunoHost, regardez [ici](https://yunohost.org/#/install) pour savoir comment l’installer et en profiter.* + +## Vue d’ensemble + +**Scrutiny is a Hard Drive Health Dashboard & Monitoring solution, merging manufacturer provided S.M.A.R.T metrics with real-world failure rates.** + +> NOTE: Scrutiny is a Work-in-Progress and still has some rough edges. + +### Features + +Scrutiny is a simple but focused application, with a couple of core features: + +- Web UI Dashboard - focused on Critical metrics +- `smartd` integration (no re-inventing the wheel) +- Auto-detection of all connected hard-drives +- S.M.A.R.T metric tracking for historical trends +- Customized thresholds using real world failure rates +- Temperature tracking +- Provided as an all-in-one Docker image (but can be installed manually) +- Configurable Alerting/Notifications via Webhooks +- (Future) Hard Drive performance testing & tracking + + +**Version incluse :** 0.6.0~ynh1 + +## Captures d’écran + +![Capture d’écran de Scrutiny](./doc/screenshots/dashboard.png) + +## Documentations et ressources + +* Documentation officielle de l’admin : +* Dépôt de code officiel de l’app : +* Documentation YunoHost pour cette app : +* Signaler un bug : + +## Informations pour les développeurs + +Merci de faire vos pull request sur la [branche testing](https://github.com/YunoHost-Apps/scrutiny_ynh/tree/testing). + +Pour essayer la branche testing, procédez comme suit. + +``` bash +sudo yunohost app install https://github.com/YunoHost-Apps/scrutiny_ynh/tree/testing --debug +ou +sudo yunohost app upgrade scrutiny -u https://github.com/YunoHost-Apps/scrutiny_ynh/tree/testing --debug +``` + +**Plus d’infos sur le packaging d’applications :** \ No newline at end of file diff --git a/conf/config/collector.yaml b/conf/config/collector.yaml index 7f785a1..dd8f451 100644 --- a/conf/config/collector.yaml +++ b/conf/config/collector.yaml @@ -23,7 +23,6 @@ version: 1 host: id: "yunohost" - # This block allows you to override/customize the settings for devices detected by # Scrutiny via `smartctl --scan` # See the "--device=TYPE" section of https://linux.die.net/man/8/smartctl @@ -62,16 +61,13 @@ devices: # metrics_info_args: '--info --json -T permissive' # used to determine device unique ID & register device with Scrutiny # metrics_smart_args: '--xall --json -T permissive' # used to retrieve smart data for each device. - log: - file: /var/log/__APP__/__APP__-collector.log + file: /var/log/__APP__/collector.log level: INFO -# + api: # endpoint: 'https://__DOMAIN____PATH__/' - endpoint: 'http://localhost:8080__PATH__/' -# endpoint: 'http://localhost:8080' -# endpoint: 'http://localhost:8080/custombasepath' + endpoint: 'http://127.0.0.1:__PORT____PATH__/' # if you need to use a custom base path (for a reverse proxy), you can add a suffix to the endpoint. 
# See docs/TROUBLESHOOTING_REVERSE_PROXY.md for more info, @@ -97,4 +93,3 @@ api: # short: # enable: false # command: '' - diff --git a/conf/config/scrutiny.yaml b/conf/config/scrutiny.yaml index a1641b1..d261901 100644 --- a/conf/config/scrutiny.yaml +++ b/conf/config/scrutiny.yaml @@ -28,7 +28,7 @@ web: # see docs/TROUBLESHOOTING_REVERSE_PROXY.md # basepath: `/scrutiny` # leave empty unless behind a path prefixed proxy - basepath: '__PATH__' + basepath: '__BASE_PATH__' database: # can also set absolute path here location: __INSTALL_DIR__/config/scrutiny.db @@ -56,7 +56,7 @@ web: # insecure_skip_verify: false log: - file: /var/log/__APP__/__APP__-web-server.log + file: /var/log/__APP__/web-server.log level: INFO # Notification "urls" look like the following. For more information about service specific configuration see diff --git a/conf/systemd-scrutiny-collector.service b/conf/systemd-scrutiny-collector.service index a21bc11..d8c36b1 100644 --- a/conf/systemd-scrutiny-collector.service +++ b/conf/systemd-scrutiny-collector.service @@ -4,6 +4,7 @@ After=network-online.target scrutiny-web-server.service [Service] Type=oneshot +# Only root can fully execute smartcl features User=root Group=root WorkingDirectory=__INSTALL_DIR__ @@ -11,29 +12,42 @@ LogsDirectory=__APP__ StateDirectory=__APP__ ExecStart=__INSTALL_DIR__/bin/scrutiny-collector-metrics-linux-amd64 run --config __INSTALL_DIR__/config/collector.yaml Restart=no -StandardOutput=append:/var/log/__APP__/__APP__-collector.log +StandardOutput=append:/var/log/__APP__/collector.log StandardError=inherit -NoNewPrivileges=true -SystemCallArchitectures=native +# Sandboxing options to harden security +# Depending on specificities of your service/app, you may need to tweak these +# .. but this should be a good baseline +# Details for these options: https://www.freedesktop.org/software/systemd/man/systemd.exec.html +NoNewPrivileges=yes PrivateTmp=yes -ProtectHome=yes -#ProtectSystem=strict -ProtectKernelTunables=yes -ProtectKernelModules=yes -ProtectKernelLogs=yes -ProtectControlGroups=yes -ProtectHostname=yes -RestrictAddressFamilies=AF_INET AF_INET6 +#PrivateDevices=yes +RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6 AF_NETLINK RestrictNamespaces=yes -LockPersonality=yes -MemoryDenyWriteExecute=yes RestrictRealtime=yes -RestrictSUIDSGID=yes -RemoveIPC=yes +#DevicePolicy=closed +#ProtectClock=yes # smartctl apparently doesn't function properly with this protection in place +ProtectHostname=yes +ProtectProc=invisible +ProtectSystem=full +ProtectControlGroups=yes +ProtectKernelModules=yes +ProtectKernelTunables=yes +LockPersonality=yes +SystemCallArchitectures=native +SystemCallFilter=~@clock @debug @module @mount @obsolete @reboot @setuid @swap @cpu-emulation @privileged -# smartctl apparently doesn't function properly with this protection in place -#ProtectClock=yes +# Denying access to capabilities that should not be relevant for webapps +# Doc: https://man7.org/linux/man-pages/man7/capabilities.7.html +CapabilityBoundingSet=~CAP_RAWIO CAP_MKNOD +CapabilityBoundingSet=~CAP_AUDIT_CONTROL CAP_AUDIT_READ CAP_AUDIT_WRITE +CapabilityBoundingSet=~CAP_SYS_BOOT CAP_SYS_TIME CAP_SYS_MODULE CAP_SYS_PACCT +CapabilityBoundingSet=~CAP_LEASE CAP_LINUX_IMMUTABLE CAP_IPC_LOCK +CapabilityBoundingSet=~CAP_BLOCK_SUSPEND CAP_WAKE_ALARM +CapabilityBoundingSet=~CAP_SYS_TTY_CONFIG +CapabilityBoundingSet=~CAP_MAC_ADMIN CAP_MAC_OVERRIDE +CapabilityBoundingSet=~CAP_NET_ADMIN CAP_NET_BROADCAST CAP_NET_RAW +CapabilityBoundingSet=~CAP_SYS_ADMIN CAP_SYS_PTRACE CAP_SYSLOG 
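+# Each "~"-prefixed CapabilityBoundingSet line removes the listed capabilities;
+# repeated "~" lines accumulate into a single deny-list for the unit.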
[Install] WantedBy=multi-user.target diff --git a/conf/systemd-scrutiny-collector.timer b/conf/systemd-scrutiny-collector.timer index f763704..81fa84d 100644 --- a/conf/systemd-scrutiny-collector.timer +++ b/conf/systemd-scrutiny-collector.timer @@ -1,5 +1,5 @@ [Unit] -Description=Scrutiny Collector timer +Description=Scrutiny Collector Timer [Timer] OnCalendar=daily diff --git a/conf/systemd-scrutiny-web-server.service b/conf/systemd-scrutiny-web-server.service index 843480d..413ed19 100644 --- a/conf/systemd-scrutiny-web-server.service +++ b/conf/systemd-scrutiny-web-server.service @@ -1,5 +1,5 @@ [Unit] -Description=Scrutiny web server +Description=Scrutiny Web Server After=network-online.target [Service] @@ -12,28 +12,42 @@ StateDirectory=__APP__ ExecStart=__INSTALL_DIR__/bin/scrutiny-web-linux-amd64 start --config __INSTALL_DIR__/config/scrutiny.yaml Restart=always RestartSec=10s -StandardOutput=append:/var/log/__APP__/__APP__-web-server.log +StandardOutput=append:/var/log/__APP__/web-server.log StandardError=inherit +# Sandboxing options to harden security +# Depending on specificities of your service/app, you may need to tweak these +# .. but this should be a good baseline +# Details for these options: https://www.freedesktop.org/software/systemd/man/systemd.exec.html NoNewPrivileges=yes -ProtectHome=yes -#ProtectSystem=strict PrivateTmp=yes PrivateDevices=yes -ProtectKernelTunables=yes -ProtectKernelModules=yes -ProtectKernelLogs=yes -ProtectControlGroups=yes -ProtectHostname=yes +RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6 AF_NETLINK +RestrictNamespaces=yes +RestrictRealtime=yes +DevicePolicy=closed ProtectClock=yes -RestrictAddressFamilies=AF_INET AF_INET6 -RestrictNamespaces=true -LockPersonality=true -MemoryDenyWriteExecute=true -RestrictRealtime=true -RestrictSUIDSGID=true -RemoveIPC=true -CapabilityBoundingSet= +ProtectHostname=yes +ProtectProc=invisible +ProtectSystem=full +ProtectControlGroups=yes +ProtectKernelModules=yes +ProtectKernelTunables=yes +LockPersonality=yes +SystemCallArchitectures=native +SystemCallFilter=~@clock @debug @module @mount @obsolete @reboot @setuid @swap @cpu-emulation @privileged + +# Denying access to capabilities that should not be relevant for webapps +# Doc: https://man7.org/linux/man-pages/man7/capabilities.7.html +CapabilityBoundingSet=~CAP_RAWIO CAP_MKNOD +CapabilityBoundingSet=~CAP_AUDIT_CONTROL CAP_AUDIT_READ CAP_AUDIT_WRITE +CapabilityBoundingSet=~CAP_SYS_BOOT CAP_SYS_TIME CAP_SYS_MODULE CAP_SYS_PACCT +CapabilityBoundingSet=~CAP_LEASE CAP_LINUX_IMMUTABLE CAP_IPC_LOCK +CapabilityBoundingSet=~CAP_BLOCK_SUSPEND CAP_WAKE_ALARM +CapabilityBoundingSet=~CAP_SYS_TTY_CONFIG +CapabilityBoundingSet=~CAP_MAC_ADMIN CAP_MAC_OVERRIDE +CapabilityBoundingSet=~CAP_NET_ADMIN CAP_NET_BROADCAST CAP_NET_RAW +CapabilityBoundingSet=~CAP_SYS_ADMIN CAP_SYS_PTRACE CAP_SYSLOG [Install] WantedBy=multi-user.target diff --git a/doc/ADMIN.md b/doc/ADMIN.md index 34e5627..52e9274 100644 --- a/doc/ADMIN.md +++ b/doc/ADMIN.md @@ -1,4 +1,8 @@ -For any collector not on that host, you should change the `--api-endpoint`. +For any collector not on that host... 
+ +...refer to the documentation at [https://github.com/AnalogJ/scrutiny/blob/master/docs/INSTALL_MANUAL.md#collector](https://github.com/AnalogJ/scrutiny/blob/master/docs/INSTALL_MANUAL.md#collector) + +...change the `--api-endpoint` For example : diff --git a/doc/DESCRIPTION.md b/doc/DESCRIPTION.md index e90c43a..c0e7b24 100644 --- a/doc/DESCRIPTION.md +++ b/doc/DESCRIPTION.md @@ -1,23 +1,8 @@ -WebUI for smartd S.M.A.R.T monitoring +**Scrutiny is a Hard Drive Health Dashboard & Monitoring solution, merging manufacturer provided S.M.A.R.T metrics with real-world failure rates.** > NOTE: Scrutiny is a Work-in-Progress and still has some rough edges. -# Introduction - -If you run a server with more than a couple of hard drives, you're probably already familiar with S.M.A.R.T and the `smartd` daemon. If not, it's an incredible open source project described as the following: - -> smartd is a daemon that monitors the Self-Monitoring, Analysis and Reporting Technology (SMART) system built into many ATA, IDE and SCSI-3 hard drives. The purpose of SMART is to monitor the reliability of the hard drive and predict drive failures, and to carry out different types of drive self-tests. - -Theses S.M.A.R.T hard drive self-tests can help you detect and replace failing hard drives before they cause permanent data loss. However, there's a couple issues with `smartd`: - -- There are more than a hundred S.M.A.R.T attributes, however `smartd` does not differentiate between critical and informational metrics -- `smartd` does not record S.M.A.R.T attribute history, so it can be hard to determine if an attribute is degrading slowly over time. -- S.M.A.R.T attribute thresholds are set by the manufacturer. In some cases these thresholds are unset, or are so high that they can only be used to confirm a failed drive, rather than detecting a drive about to fail. -- `smartd` is a command line only tool. For head-less servers a web UI would be more valuable. - -**Scrutiny is a Hard Drive Health Dashboard & Monitoring solution, merging manufacturer provided S.M.A.R.T metrics with real-world failure rates.** - -# Features +### Features Scrutiny is a simple but focused application, with a couple of core features: diff --git a/doc/POST_INSTALL.md b/doc/POST_INSTALL.md index 34e5627..52e9274 100644 --- a/doc/POST_INSTALL.md +++ b/doc/POST_INSTALL.md @@ -1,4 +1,8 @@ -For any collector not on that host, you should change the `--api-endpoint`. +For any collector not on that host... + +...refer to the documentation at [https://github.com/AnalogJ/scrutiny/blob/master/docs/INSTALL_MANUAL.md#collector](https://github.com/AnalogJ/scrutiny/blob/master/docs/INSTALL_MANUAL.md#collector) + +...change the `--api-endpoint` For example : diff --git a/doc/POST_UPGRADE.md b/doc/POST_UPGRADE.md index 59eb2e2..52e9274 100644 --- a/doc/POST_UPGRADE.md +++ b/doc/POST_UPGRADE.md @@ -1,6 +1,9 @@ -For any collector not on that host, you should change the `--api-endpoint`. +For any collector not on that host... 
+ +...refer to the documentation at [https://github.com/AnalogJ/scrutiny/blob/master/docs/INSTALL_MANUAL.md#collector](https://github.com/AnalogJ/scrutiny/blob/master/docs/INSTALL_MANUAL.md#collector) + +...change the `--api-endpoint` For example : > `/opt/scrutiny/bin/scrutiny-collector-metrics-linux-amd64 run --api-endpoint https://__DOMAIN____PATH__` - diff --git a/manifest.toml b/manifest.toml index b4feeb2..2531e0a 100644 --- a/manifest.toml +++ b/manifest.toml @@ -9,8 +9,7 @@ version = "0.6.0~ynh1" maintainers = ["ewilly"] [upstream] -license = "free" -website = "https://github.com/AnalogJ/scrutiny" +license = "MIT" admindoc = "https://github.com/AnalogJ/scrutiny/tree/master/docs" code = "https://github.com/AnalogJ/scrutiny" fund = "https://paypal.me/analogj/usd10" @@ -18,7 +17,7 @@ fund = "https://paypal.me/analogj/usd10" [integration] yunohost = ">= 11.1.6" architectures = ["amd64", "arm64"] -multi_instance = true +multi_instance = false ldap = "not_relevant" sso = "not_relevant" disk = "50M" @@ -35,7 +34,7 @@ ram.runtime = "50M" [install.collector] ask.en = "Should the collector be activated on this host?" - ask.fr = "Le collector doit-il est activé sur cet host ?" + help.en = "Let it to true if yunohost is running on bare metal (i.e. not in a VM or in a LXC)" type = "boolean" default = true @@ -53,8 +52,7 @@ ram.runtime = "50M" main.show_tile = true main.protected= true main.allowed = "admins" - #api.url = "__DOMAIN____PATH__/api" # FIXME : __PATH__ in not handled by yunohost for the api in manifest.toml - api.url = "re:__DOMAIN__/.*api/.*" + api.url = "/api" api.auth_header = false api.show_tile = false api.protected= true diff --git a/scripts/change_url b/scripts/change_url index f0964a6..c5594df 100644 --- a/scripts/change_url +++ b/scripts/change_url @@ -16,7 +16,12 @@ source /usr/share/yunohost/helpers #================================================= ynh_script_progression --message="Stopping a systemd service..." --weight=1 -ynh_systemd_action --service_name=$app --action="stop" --log_path="/var/log/$app/$app.log" +ynh_systemd_action --service_name="influxdb" --action="stop" +ynh_systemd_action --service_name="scrutiny-web-server.service" --action="stop" --log_path="/var/log/$app/web-server.log" +if [ $collector -eq 1 ] +then + ynh_systemd_action --service_name="scrutiny-collector.timer" --action="stop" --log_path="/var/log/$app/collector.log" +fi #================================================= # MODIFY URL IN NGINX CONF @@ -28,9 +33,31 @@ ynh_change_url_nginx_config #================================================= # SPECIFIC MODIFICATIONS #================================================= -# ... 
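+# The YAML rewrites below update single keys in place with sed; the leading "/" of
+# the new path is escaped first (new_base_path="\\${new_path}") because "/" is also
+# the sed delimiter, which is enough for the one-segment paths normally used here.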
+# in the change_url context, variables called new_domain, new_path, old_domain, old_path will be available, as well as change_domain and change_path equal to 0 (false) or 1 (true) depending if the domain / path changed #================================================= +if [ "$old_path" != "$new_path" ] +then + # Update scrutiny.yaml + if [ "${new_path}" == "/" ] + then + new_base_path="" + else + new_base_path="\\${new_path}" + fi + key="basepath" + new_value="'$new_base_path'" + sed --regexp-extended "s/^(\s*${key}:\s*).*/\1${new_value}/" --in-place "$install_dir/config/scrutiny.yaml" + ynh_store_file_checksum --file="$install_dir/config/scrutiny.yaml" + + # Update collector.yaml + port=$(ynh_app_setting_get --app=$app --key=port) + key="endpoint" + new_value="'http:\/\/127.0.0.1:${port}${new_base_path}\/'" + sed --regexp-extended "s/^(\s*${key}:\s*).*/\1${new_value}/" --in-place "$install_dir/config/collector.yaml" + ynh_store_file_checksum --file="$install_dir/config/collector.yaml" +fi + #================================================= # GENERIC FINALISATION #================================================= @@ -38,7 +65,14 @@ ynh_change_url_nginx_config #================================================= ynh_script_progression --message="Starting a systemd service..." --weight=1 -ynh_systemd_action --service_name=$app --action="start" --log_path="/var/log/$app/$app.log" +ynh_systemd_action --service_name="influxdb" --action="start" +ynh_systemd_action --service_name="scrutiny-web-server.service" --action="start" --log_path="/var/log/$app/web-server.log" +if [ $collector -eq 1 ] +then + systemctl daemon-reload + ynh_systemd_action --service_name="scrutiny-collector.service" --action="start" --log_path="/var/log/$app/collector.log" + ynh_systemd_action --service_name="scrutiny-collector.timer" --action="start" --log_path="/var/log/$app/collector.log" +fi #================================================= # END OF SCRIPT diff --git a/scripts/install b/scripts/install index 8e63f97..1b5f4e5 100755 --- a/scripts/install +++ b/scripts/install @@ -17,10 +17,12 @@ source /usr/share/yunohost/helpers ynh_script_progression --message="Setting up source files..." 
--weight=1 mkdir -p "$install_dir/bin" -if [ $YNH_ARCH == "amd64" ]; then +if [ $YNH_ARCH == "amd64" ] +then ynh_setup_source --source_id="src/scrutiny-web-linux-amd64" --dest_dir="$install_dir/bin" ynh_setup_source --source_id="src/scrutiny-collector-metrics-linux-amd64" --dest_dir="$install_dir/bin" -elif [ $YNH_ARCH == "arm64" ]; then +elif [ $YNH_ARCH == "arm64" ] +then ynh_setup_source --source_id="src/scrutiny-web-linux-arm64" --dest_dir="$install_dir/bin" ynh_setup_source --source_id="src/scrutiny-collector-metrics-linux-arm64" --dest_dir="$install_dir/bin" fi @@ -37,11 +39,12 @@ ynh_add_nginx_config # Create a dedicated systemd config ynh_add_systemd_config --service="scrutiny-web-server" --template="systemd-scrutiny-web-server.service" -yunohost service add "scrutiny-web-server" --description="WebUI for smartd S.M.A.R.T monitoring" --log="/var/log/$app/$app-web-server.log" +yunohost service add "scrutiny-web-server" --description="WebUI for smartd S.M.A.R.T monitoring" --log="/var/log/$app/web-server.log" ynh_add_config --template="systemd-scrutiny-collector.service" --destination="/etc/systemd/system/scrutiny-collector.service" -if [ $collector ]; then - yunohost service add "scrutiny-collector.timer" --description="Collector timer for smartd S.M.A.R.T monitoring" --log="/var/log/$app/$app-web-server.log" +if [ $collector -eq 1 ] +then + yunohost service add "scrutiny-collector" --description="Collector running on timer (daily) for smartd S.M.A.R.T monitoring" --log="/var/log/$app/collector.log" --test_status="systemctl show scrutiny-collector.service -p ActiveState --value | grep -v failed" fi # Use logrotate to manage application logfile(s) @@ -55,6 +58,12 @@ ynh_use_logrotate --specific_user="$app" ynh_script_progression --message="Adding a configuration file..." --weight=1 mkdir -p "$install_dir/config" +if [ "${path}" == "/" ] +then + base_path="" +else + base_path="${path}" +fi ynh_add_config --template="config/scrutiny.yaml" --destination="$install_dir/config/scrutiny.yaml" ynh_add_config --template="systemd-scrutiny-collector.timer" --destination="/etc/systemd/system/scrutiny-collector.timer" @@ -74,13 +83,14 @@ myynh_set_permissions ynh_script_progression --message="Starting a systemd service..." 
--weight=1 ynh_systemd_action --service_name="influxdb" --action="enable" ynh_systemd_action --service_name="influxdb" --action="start" -ynh_systemd_action --service_name="scrutiny-web-server.service" --action="start" --log_path="/var/log/$app/scrutiny-web-server.log" -if [ $collector ]; then +ynh_systemd_action --service_name="scrutiny-web-server.service" --action="start" --log_path="/var/log/$app/web-server.log" +if [ $collector -eq 1 ] +then systemctl daemon-reload ynh_systemd_action --service_name="scrutiny-collector.service" --action="enable" - ynh_systemd_action --service_name="scrutiny-collector.service" --action="start" --log_path="/var/log/$app/scrutiny-collector.log" + ynh_systemd_action --service_name="scrutiny-collector.service" --action="start" --log_path="/var/log/$app/collector.log" ynh_systemd_action --service_name="scrutiny-collector.timer" --action="enable" - ynh_systemd_action --service_name="scrutiny-collector.timer" --action="start" + ynh_systemd_action --service_name="scrutiny-collector.timer" --action="start" --log_path="/var/log/$app/collector.log" fi #================================================= diff --git a/scripts/remove b/scripts/remove index 8a84123..2e42b7c 100755 --- a/scripts/remove +++ b/scripts/remove @@ -25,21 +25,18 @@ fi if ynh_exec_warn_less yunohost service status "scrutiny-collector" >/dev/null then - ynh_script_progression --message="Removing scrutiny-collector timer integration..." --weight=1 + ynh_script_progression --message="Removing scrutiny-collector service integration..." --weight=1 yunohost service remove "scrutiny-collector" fi ynh_remove_systemd_config --service="scrutiny-web-server" ynh_remove_systemd_config --service="scrutiny-collector" +ynh_secure_remove --file="/etc/systemd/system/scrutiny-collector.timer" ynh_remove_nginx_config ynh_remove_logrotate -if [ $collector ]; then - ynh_secure_remove --file="/etc/systemd/system/scrutiny-collector.timer" -fi - ynh_secure_remove --file="/var/log/$app" #================================================= diff --git a/scripts/restore b/scripts/restore index fefc193..9e1dd5e 100755 --- a/scripts/restore +++ b/scripts/restore @@ -30,9 +30,10 @@ systemctl enable "/etc/systemd/system/scrutiny-web-server.service" --quiet ynh_restore_file --origin_path="/etc/systemd/system/scrutiny-collector.service" ynh_restore_file --origin_path="/etc/systemd/system/scrutiny-collector.timer" -yunohost service add "scrutiny-web-server" --description="WebUI for smartd S.M.A.R.T monitoring" --log="/var/log/$app/$app-web-server.log" -if [ $collector ]; then - yunohost service add "scrutiny-collector.timer" --description="Collector timer for smartd S.M.A.R.T monitoring" --log="/var/log/$app/$app-web-server.log" +yunohost service add "scrutiny-web-server" --description="WebUI for smartd S.M.A.R.T monitoring" --log="/var/log/$app/web-server.log" +if [ $collector -eq 1 ] +then + yunohost service add "scrutiny-collector" --description="Collector running on timer (daily) for smartd S.M.A.R.T monitoring" --log="/var/log/$app/collector.log" --test_status="systemctl show scrutiny-collector.service -p ActiveState --value | grep -v failed" fi ynh_restore_file --origin_path="/etc/logrotate.d/$app" @@ -47,13 +48,14 @@ ynh_script_progression --message="Reloading NGINX web server and $app's service. # Typically you only have either $app or php-fpm but not both at the same time... 
ynh_systemd_action --service_name="influxdb" --action="start" -ynh_systemd_action --service_name="scrutiny-web-server.service" --action="start" --log_path="/var/log/$app/scrutiny-web-server.log" -if [ $collector ]; then +ynh_systemd_action --service_name="scrutiny-web-server.service" --action="start" --log_path="/var/log/$app/web-server.log" +if [ $collector -eq 1 ] +then systemctl daemon-reload ynh_systemd_action --service_name="scrutiny-collector.service" --action="enable" - ynh_systemd_action --service_name="scrutiny-collector.service" --action="start" --log_path="/var/log/$app/scrutiny-collector.log" + ynh_systemd_action --service_name="scrutiny-collector.service" --action="start" --log_path="/var/log/$app/collector.log" ynh_systemd_action --service_name="scrutiny-collector.timer" --action="enable" - ynh_systemd_action --service_name="scrutiny-collector.timer" --action="start" + ynh_systemd_action --service_name="scrutiny-collector.timer" --action="start" --log_path="/var/log/$app/collector.log" fi ynh_systemd_action --service_name=nginx --action=reload diff --git a/scripts/upgrade b/scripts/upgrade index e88b1f6..da81cdb 100755 --- a/scripts/upgrade +++ b/scripts/upgrade @@ -24,11 +24,11 @@ upgrade_type=$(ynh_check_app_version_changed) #================================================= ynh_script_progression --message="Stopping a systemd service..." --weight=1 -ynh_systemd_action --service_name=$app --action="stop" --log_path="/var/log/$app/$app.log" ynh_systemd_action --service_name="influxdb" --action="stop" -ynh_systemd_action --service_name="scrutiny-web-server.service" --action="stop" --log_path="/var/log/$app/scrutiny-web-server.log" -if [ $collector ]; then - ynh_systemd_action --service_name="scrutiny-collector.timer" --action="stop" +ynh_systemd_action --service_name="scrutiny-web-server.service" --action="stop" --log_path="/var/log/$app/web-server.log" +if [ $collector -eq 1 ] +then + ynh_systemd_action --service_name="scrutiny-collector.timer" --action="stop" --log_path="/var/log/$app/collector.log" fi #================================================= @@ -42,10 +42,12 @@ then ynh_script_progression --message="Upgrading source files..." 
--weight=1 # Download, check integrity, uncompress and patch the source from app.src - if [ $YNH_ARCH == "amd64" ]; then + if [ $YNH_ARCH == "amd64" ] + then ynh_setup_source --source_id="src/scrutiny-web-linux-amd64" --dest_dir="$install_dir/bin" ynh_setup_source --source_id="src/scrutiny-collector-metrics-linux-amd64" --dest_dir="$install_dir/bin" - elif [ $YNH_ARCH == "arm64" ]; then + elif [ $YNH_ARCH == "arm64" ] + then ynh_setup_source --source_id="src/scrutiny-web-linux-arm64" --dest_dir="$install_dir/bin" ynh_setup_source --source_id="src/scrutiny-collector-metrics-linux-arm64" --dest_dir="$install_dir/bin" fi @@ -60,11 +62,12 @@ ynh_script_progression --message="Upgrading system configurations related to $ap ynh_add_nginx_config ynh_add_systemd_config --service="scrutiny-web-server" --template="systemd-scrutiny-web-server.service" -yunohost service add "scrutiny-web-server" --description="WebUI for smartd S.M.A.R.T monitoring" --log="/var/log/$app/$app-web-server.log" +yunohost service add "scrutiny-web-server" --description="WebUI for smartd S.M.A.R.T monitoring" --log="/var/log/$app/web-server.log" ynh_add_config --template="systemd-scrutiny-collector.service" --destination="/etc/systemd/system/scrutiny-collector.service" -if [ $collector ]; then - yunohost service add "scrutiny-collector.timer" --description="Collector timer for smartd S.M.A.R.T monitoring" --log="/var/log/$app/$app-web-server.log" +if [ $collector -eq 1 ] +then + yunohost service add "scrutiny-collector" --description="Collector running on timer (daily) for smartd S.M.A.R.T monitoring" --log="/var/log/$app/collector.log" --test_status="systemctl show scrutiny-collector.service -p ActiveState --value | grep -v failed" fi ynh_use_logrotate --specific_user="$app" --non-append @@ -76,6 +79,12 @@ ynh_use_logrotate --specific_user="$app" --non-append #================================================= ynh_script_progression --message="Updating a configuration file..." --weight=1 +if [ "${path}" == "/" ] +then + base_path="" +else + base_path="${path}" +fi ynh_add_config --template="config/scrutiny.yaml" --destination="$install_dir/config/scrutiny.yaml" ynh_add_config --template="systemd-scrutiny-collector.timer" --destination="/etc/systemd/system/scrutiny-collector.timer" @@ -93,13 +102,14 @@ myynh_set_permissions ynh_script_progression --message="Starting a systemd service..." 
--weight=1 ynh_systemd_action --service_name="influxdb" --action="enable" ynh_systemd_action --service_name="influxdb" --action="start" -ynh_systemd_action --service_name="scrutiny-web-server.service" --action="start" --log_path="/var/log/$app/scrutiny-web-server.log" -if [ $collector ]; then +ynh_systemd_action --service_name="scrutiny-web-server.service" --action="start" --log_path="/var/log/$app/web-server.log" +if [ $collector -eq 1 ] +then systemctl daemon-reload ynh_systemd_action --service_name="scrutiny-collector.service" --action="enable" - ynh_systemd_action --service_name="scrutiny-collector.service" --action="start" --log_path="/var/log/$app/scrutiny-collector.log" + ynh_systemd_action --service_name="scrutiny-collector.service" --action="start" --log_path="/var/log/$app/collector.log" ynh_systemd_action --service_name="scrutiny-collector.timer" --action="enable" - ynh_systemd_action --service_name="scrutiny-collector.timer" --action="start" + ynh_systemd_action --service_name="scrutiny-collector.timer" --action="start" --log_path="/var/log/$app/collector.log" fi #================================================= diff --git a/tests.toml b/tests.toml index de10639..bec3677 100644 --- a/tests.toml +++ b/tests.toml @@ -18,8 +18,4 @@ test_format = 1.0 # Commits to test upgrade from # ------------------------------- - - -[some_additional_testsuite] - - args.collector = false + diff --git a/updater.sh b/updater.sh deleted file mode 100755 index d4c6fbb..0000000 --- a/updater.sh +++ /dev/null @@ -1,153 +0,0 @@ -#!/bin/bash - -#================================================= -# PACKAGE UPDATING HELPER -#================================================= - -# This script is meant to be run by GitHub Actions -# The YunoHost-Apps organisation offers a template Action to run this script periodically -# Since each app is different, maintainers can adapt its contents so as to perform -# automatic actions when a new upstream release is detected. - -# Remove this exit command when you are ready to run this Action -#exit 1 - -#================================================= -# FETCHING LATEST RELEASE AND ITS ASSETS -#================================================= - -# Fetching information -current_version=$(cat manifest.toml | awk -v key="version" '$1 == key { gsub("\"","",$3);print $3 }' | awk -F'~' '{print $1}') -repo=$(cat manifest.toml | awk -v key="code" '$1 == key { gsub("\"","",$3);print $3 }' | awk -F'https://github.com/' '{print $2}') - -# Some jq magic is needed, because the latest upstream release is not always the latest version (e.g. security patches for older versions) -version=$(curl --silent "https://api.github.com/repos/$repo/releases" | jq -r '.[] | select( .prerelease != true ) | .tag_name' | sort -V | tail -1) -assets=($(curl --silent "https://api.github.com/repos/$repo/releases" | jq -r '[ .[] | select(.tag_name=="'$version'").assets[].browser_download_url ] | join(" ") | @sh' | tr -d "'")) - -# Later down the script, we assume the version has only digits and dots -# Sometimes the release name starts with a "v", so let's filter it out. -# You may need more tweaks here if the upstream repository has different naming conventions. 
-if [[ ${version:0:1} == "v" || ${version:0:1} == "V" ]]; then - version=${version:1} -fi - -# Setting up the environment variables -echo "Current version: $current_version" -echo "Latest release from upstream: $version" -echo "VERSION=$version" >> $GITHUB_ENV -echo "REPO=$repo" >> $GITHUB_ENV -# For the time being, let's assume the script will fail -echo "PROCEED=false" >> $GITHUB_ENV - -# Proceed only if the retrieved version is greater than the current one -if ! dpkg --compare-versions "$current_version" "lt" "$version" ; then - echo "::warning ::No new version available" - exit 0 -# Proceed only if a PR for this new version does not already exist -elif git ls-remote -q --exit-code --heads https://github.com/$GITHUB_REPOSITORY.git ci-auto-update-v$version ; then - echo "::warning ::A branch already exists for this update" - exit 0 -fi - -# Each release can hold multiple assets (e.g. binaries for different architectures, source code, etc.) -echo "${#assets[@]} available asset(s)" - -#================================================= -# UPDATE SOURCE FILES -#================================================= - -# Here we use the $assets variable to get the resources published in the upstream release. -# Here is an example for Grav, it has to be adapted in accordance with how the upstream releases look like. - -# Let's loop over the array of assets URLs -for asset_url in ${assets[@]}; do - -echo "Handling asset at $asset_url" - -# Assign the asset to a source file in conf/ directory -# Here we base the source file name upon a unique keyword in the assets url (admin vs. update) -# Leave $src empty to ignore the asset -case $asset_url in - *"scrutiny-collector-metrics-linux-amd64"*) - src="scrutiny-collector-metrics-linux-amd64" - ;; - *"scrutiny-collector-metrics-linux-arm64"*) - src="scrutiny-collector-metrics-linux-arm64" - ;; - *"scrutiny-web-frontend.tar.gz"*) - src="scrutiny-web-frontend.tar.gz" - ;; - *"scrutiny-web-linux-amd64"*) - src="scrutiny-web-linux-amd64" - ;; - *"scrutiny-web-linux-arm64"*) - src="scrutiny-web-linux-arm64" - ;; - *) - src="" - ;; -esac - -# If $src is not empty, let's process the asset -if [ ! -z "$src" ]; then - -# Create the temporary directory -tempdir="$(mktemp -d)" - -# Download sources and calculate checksum -filename=${asset_url##*/} -curl --silent -4 -L $asset_url -o "$tempdir/$filename" -checksum=$(sha256sum "$tempdir/$filename" | head -c 64) - -# Delete temporary directory -rm -rf $tempdir - -# Get extension -if [[ $filename == *.tar.gz ]]; then - extension=tar.gz - extract=true -elif [[ $filename == ${filename##*.} ]]; then - extension="" - extract=false -else - extension=${filename##*.} - extract=false -fi - -# Rewrite source file -cat < conf/src/$src.src -SOURCE_URL=$asset_url -SOURCE_SUM=$checksum -SOURCE_SUM_PRG=sha256sum -SOURCE_FORMAT=$extension -SOURCE_IN_SUBDIR=true -SOURCE_FILENAME=$filename -SOURCE_EXTRACT=$extract -EOT -echo "... conf/src/$src.src updated" - -else -echo "... asset ignored" -fi - -done - -#================================================= -# SPECIFIC UPDATE STEPS -#================================================= - -# Any action on the app's source code can be done. -# The GitHub Action workflow takes care of committing all changes after this script ends. 
- -#================================================= -# GENERIC FINALIZATION -#================================================= - -# Replace new version in manifest -sed -i "s/^version = .*/version = \"$version~ynh1\"/" manifest.toml - -# No need to update the README, yunohost-bot takes care of it - -# The Action will proceed only if the PROCEED environment variable is set to true -echo "PROCEED=true" >> $GITHUB_ENV -exit 0
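The `basepath`/`endpoint` update added in `scripts/change_url` comes down to an in-place `sed` substitution on a single YAML key. A minimal standalone sketch of that pattern, using a hypothetical temporary file and path (as in the script, only the leading slash is escaped because `/` doubles as the `sed` delimiter):

```bash
#!/usr/bin/env bash
set -eu

# Hypothetical config file and new path, for illustration only
config="$(mktemp)"
new_path="/scrutiny"

cat > "$config" <<'EOF'
web:
  src:
    frontend:
      basepath: '/old'
EOF

# Escape the leading "/" so it does not close the sed expression early,
# mirroring new_base_path="\\${new_path}" in the change_url script
new_base_path="\\${new_path}"
key="basepath"
new_value="'${new_base_path}'"

# Keep the indentation and the key, replace only the value
sed --regexp-extended "s/^(\s*${key}:\s*).*/\1${new_value}/" --in-place "$config"

grep basepath "$config"   # -> basepath: '/scrutiny'
rm -f "$config"
```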
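Once deployed, the units and log files this patch sets up can be checked with standard tooling; a quick sanity check, assuming the default app id `scrutiny`:

```bash
# Web server unit installed via ynh_add_systemd_config
systemctl status scrutiny-web-server.service

# Daily collector timer installed at /etc/systemd/system/scrutiny-collector.timer
systemctl list-timers scrutiny-collector.timer

# Log files written via StandardOutput=append: in the unit files
tail -n 50 /var/log/scrutiny/web-server.log /var/log/scrutiny/collector.log
```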