commit ed2b803ce5 (parent 7c2540a185)
Author: Biedermann Steve
Date: 2024-05-10 10:23:49 +02:00
11 changed files with 193 additions and 687 deletions

.gitignore (vendored, 1 change)

@@ -1 +1,2 @@
 /target
+/out

Cargo.lock (generated, 16 changes)

@@ -332,6 +332,15 @@ dependencies = [
  "hashbrown",
 ]
 
+[[package]]
+name = "markdown"
+version = "1.0.0-alpha.17"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "21e27d6220ce21f80ce5c4201f23a37c6f1ad037c72c9d1ff215c2919605a5d6"
+dependencies = [
+ "unicode-id",
+]
+
 [[package]]
 name = "memchr"
 version = "2.7.2"
@@ -469,6 +478,7 @@ dependencies = [
  "clap",
  "crossterm 0.27.0",
  "indexmap",
+ "markdown",
  "ratatui",
  "regex",
  "rsn",
@@ -745,6 +755,12 @@ dependencies = [
  "unicode-width",
 ]
 
+[[package]]
+name = "unicode-id"
+version = "0.3.4"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "b1b6def86329695390197b82c1e244a54a131ceb66c996f2088a3876e2ae083f"
+
 [[package]]
 name = "unicode-ident"
 version = "1.0.12"

Cargo.toml

@@ -11,6 +11,7 @@ anyhow = "1.0.83"
 clap = { version = "4.5.4", features = ["derive"] }
 crossterm = "0.27.0"
 indexmap = { version = "2.2.6", features = ["serde"] }
+markdown = "1.0.0-alpha.17"
 ratatui = "0.26.2"
 regex = "1.10.4"
 rsn = "0.1.0"

generate.sh (new executable file, 12 lines)

@@ -0,0 +1,12 @@
#!/bin/bash
source_dir="$(dirname "${BASH_SOURCE[0]}")"
pushd "$source_dir"
mkdir -p out
cargo build
cargo run -q -- schema > out/schema.json
cargo run -q -- demo > out/demo.yml
cargo run -q -- md req.yml > out/requirements.md
cargo run -q -- html req.yml > out/requirements.html
cargo run -q -- check req.yml test_result.txt > out/text_result.md
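The out/ directory this script populates is the same one added to .gitignore above, which keeps the generated artifacts out of version control; accordingly, the previously committed outputs orig.md, out.md, and out.pdf are deleted below.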

orig.md (deleted, 126 lines)

@@ -1,126 +0,0 @@
# Requirements for journal-uploader
[[_TOC_]]
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED",
"MAY", and "OPTIONAL" in this document are to be interpreted as described in
[RFC 2119](https://datatracker.ietf.org/doc/html/rfc2119).
## Purpose
The journal-uploader has two main functionalities.
- Take a stream of log messages and filter them depending on their severity
- Upload journal logs for a specified time when activated through cloud call
## Requirements
### 1. Traced Logging
#### 1.1 File Monitoring
- **1.1.1 Continuous Monitoring:** The tool **_MUST_** continuously monitor a designated directory.
#### 1.2 File Detection
- **1.2.1 Detection of New Files:** The tool **_MUST_** detect the addition of new files in the monitored directory.
- **1.2.2 Avoid Re-processing:** The tool **_MUST NOT_** process files that have already been processed.
#### 1.3 File Processing
- **1.3.1 Reading Log Messages:** When a new file is processed, each log message **_SHOULD_** be put into a buffer.
- **1.3.2 Filtering Log Messages:** The tool will search for messages of a defined priority (Trigger Priority).
Each message of this priority, as well as all messages before and after, which are inside a defined timespan, **_MUST_**
get written into a file. Every other message **_SHOULD_** get dropped.
- **1.3.3 No Duplicate Log Messages:** The tool **_SHALL_** make sure that no log entry will be written to the file twice.
#### 1.4 Traced Log Rotation
- **1.4.1 Rotating Files:** When the size of the current traced log file exceeds a certain threshold,
it **_MUST_** be closed and a new file **_MUST_** be opened for writing.
- **1.4.2 Compression of Rotated Files:** Each traced log file **_MUST_** be compressed after it has been rotated.
- **1.4.3 Rotating Directory:** When the directory size exceeds a certain threshold, the tool **_MUST_** delete the oldest
files in the directory, until the size is below the threshold again.
### 2. Remote Journal Logging
#### 2.1 Service Activation
- **2.1.1 Cloud Activation:** The remote journal logging **_SHALL_** be startable through a function call from the cloud.
The API call takes the duration and max interval as arguments.
- **2.1.2 Duration:** The remote journal logging **_SHOULD_** stay active, until it reaches the specified duration.
- **2.1.3 Max Interval:** If no upload was done after the amount of time specified in max interval,
a log rotation **_SHALL_** be triggered, which will in turn get picked up by the file monitoring.
- **2.1.4 Analytics Not Accepted:** If the user has not accepted the usage of their data, the cloud call **_MUST_**
result in an error.
#### 2.2 File Monitoring
- **2.2.1 Continuous Monitoring:** The tool **_SHOULD_** continuously monitor a designated directory.
#### 2.3 File Detection
- **2.3.1 Detection of New Files:** The tool **_MUST_** detect the addition of new files in the monitored directory.
- **2.3.2 Avoid Re-processing:** The tool **_MUST NOT_** process files that have already been processed.
#### 2.4 File Processing
- **2.4.1 File Upload:** When a file gets detected, it **_SHOULD_** get uploaded to the cloud.
- **2.4.2 No Duplicate Files:** Already processed files **_MUST NOT_** get uploaded again.
- **2.4.3 Revoking Analytics:** If the user revokes the usage of their data, the service **_MAY_** continue running
but **_MUST NOT_** upload any data until the user allows the usage of their data again.
- **2.4.4 Duration Expired:** After the specified duration has expired, the service **_SHOULD_** stop uploading files.
### 3. Configuration
- **3.1 Configurable Journal Directory:** Users **_SHOULD_** be able to specify the directory to be monitored for
journal files.
- **3.2 Configurable Output Directory:** Users **_SHOULD_** be able to specify the directory into which the final files
will be written.
- **3.3 Configurable Trigger Priority:** Users **_SHOULD_** be able to specify which priority triggers the filtering.
- **3.4 Configurable Journal Context:** Users **_SHOULD_** be able to specify how many seconds of context will be added
to traced logs when encountering a trigger priority.
- **3.5 Configurable Max File Size:** Users **_SHOULD_** be able to specify the max file size, at which a file gets rotated.
- **3.6 Configurable Max Directory Size:** Users **_SHOULD_** be able to specify the max directory size, at which a
directory gets rotated.
- **3.7 Configurable File Monitoring Interval:** Users **_SHOULD_** be able to specify an interval, which **_SHOULD_** change
how long the tool waits before checking if new files are available.
### 4. Performance Requirements
- **4.1 Efficiency:** The tool **_SHOULD_** efficiently monitor and process files without excessive resource consumption.
- **4.2 Interval Delay:** The tool **_SHOULD_** do its work with no more than 10 seconds delay after its interval.
### 5. Data Protection
- **5.1 No Insecure Connection:** The tool **_MUST_** send data only through a secure connection.
- **5.2 GDPR compliance:** The tool **_MUST NOT_** upload data if the user has not agreed to share this information.
### 6. Testing
- **6.1 Unit Tests:** Comprehensive unit tests **_SHOULD_** be written to cover major functionalities.
- **6.2 Integration Tests:** Integration tests **_SHOULD_** be conducted to ensure all parts of the tool work together
seamlessly.
## Definitions
- Default Journal Directory: /run/log/journal/<machine_id>
- Machine ID can be found at /etc/machine-id
- Default Output Directory: /run/log/filtered-journal
## Config Defaults
- **Journal Directory**
- type: Path
- **Required**: This value **_MUST_** be provided as a start parameter.
- **Output Directory**
- type: Path
- **Required**: This value **_MUST_** be provided as a start parameter.
- **Trigger Priority**
- type: Enum
- Valid Values: _Emergency, Alert, Critical, Error, Warning, Notice, Info, Debug_
- Default Value: _Warning_
- **Journal Context**
- type: Integer
- unit: Seconds
- Default Value: _15_
- **Max File Size**
- type: Integer
- unit: Bytes
- Default Value: _8388608_ (8 MB)
- **Max Directory Size**
- type: Integer
- unit: Bytes
- Default Value: _75497472_ (72 MB)
- **File Monitoring Interval**
- type: Integer
- unit: Seconds
- Default Value: _10_

out.md (deleted, 120 lines)

@@ -1,120 +0,0 @@
# Requirements for journal-uploader
[[_TOC_]]
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED",
"MAY", and "OPTIONAL" in this document are to be interpreted as described in
[RFC 2119](https://datatracker.ietf.org/doc/html/rfc2119).
## Description
The journal-uploader has two main functionalities.
- Take a stream of log messages and filter them depending on their severity
- Upload journal logs for a specified time when activated through cloud call
## Requirements
### _TOPIC-1_ - Journal Watcher
#### _SUB-1.1_ - File Monitoring
- **_REQ-1.1.1_ - Continuous Monitoring:** The tool **_MUST_** continuously monitor a designated directory.
#### _SUB-1.2_ - File Detection
- **_REQ-1.2.1_ - Detection of New Files:** The tool **_MUST_** detect the addition of new files in the monitored directory.
- **_REQ-1.2.2_ - Avoid Re-processing:** The tool **_MUST NOT_** process files that have already been processed.
### _TOPIC-2_ - Traced Logging
#### _SUB-2.1_ - File Processing
- **_REQ-2.1.1_ - Reading Log Messages:** When a new file is processed, each log message **_SHOULD_** be put into a buffer.
- **_REQ-2.1.2_ - Filtering Log Messages:** The tool will search for messages of a defined priority (Trigger Priority).
Each message of this priority, as well as all messages before and after, which are inside a defined timespan, **_MUST_**
get written into a file. Every other message **_SHOULD_** get dropped.
- **_REQ-2.1.3_ - No Duplicate Log Messages:** The tool **_SHALL_** make sure that no log entry will be written to the file twice.
#### _SUB-2.2_ - Traced Log Rotation
- **_REQ-2.2.1_ - Rotating Files:** When the size of the current traced log file exceeds a certain threshold,
it **_MUST_** be closed and a new file **_MUST_** be opened for writing.
- **_REQ-2.2.2_ - Compression of Rotated Files:** Each traced log file **_MUST_** be compressed after it has been rotated.
- **_REQ-2.2.3_ - Rotating Directory:** When the directory size exceeds a certain threshold, the tool **_MUST_** delete the oldest
files in the directory, until the size is below the threshold again.
### _TOPIC-3_ - Remote Journal Logging
#### _SUB-3.1_ - Service Activation
- **_REQ-3.1.1_ - Cloud Activation:** The remote journal logging **_SHALL_** be startable through a function call from the cloud.
The API call takes the duration and max interval as arguments.
- **_REQ-3.1.2_ - Duration:** The remote journal logging **_SHOULD_** stay active, until it reaches the specified duration.
- **_REQ-3.1.3_ - Max Interval:** If no upload was done after the amount of time specified in max interval,
a log rotation **_SHALL_** be triggered, which will in turn get picked up by the file monitoring.
- **_REQ-3.1.4_ - Analytics Not Accepted:** If the user has not accepted the usage of their data, the cloud call **_MUST_**
result in an error.
#### _SUB-3.2_ - File Processing
- **_REQ-3.2.1_ - File Upload:** When a file gets detected, it **_SHOULD_** get uploaded to the cloud.
- **_REQ-3.2.2_ - No Duplicate Files:** Already processed files **_MUST NOT_** get uploaded again.
- **_REQ-3.2.3_ - Revoking Analytics:** If the user revokes the usage of their data, the service **_MAY_** continue running
but **_MUST NOT_** upload any data until the user allows the usage of their data again.
- **_REQ-3.2.4_ - Duration Expired:** After the specified duration has expired, the service **_SHOULD_** stop uploading files.
### _TOPIC-4_ - Configuration
- **_CONF-4.1_ - Journal Directory:** Users **_SHOULD_** be able to specify the directory to be monitored for journal files.
- **_CONF-4.2_ - Output Directory:** Users **_SHOULD_** be able to specify the directory into which the final files will be written.
- **_CONF-4.3_ - Trigger Priority:** Users **_SHOULD_** be able to specify which priority triggers the filtering.
- **_CONF-4.4_ - Journal Context:** Users **_SHOULD_** be able to specify how many seconds of context will be added to traced logs when encountering a trigger priority.
- **_CONF-4.5_ - Max File Size:** Users **_SHOULD_** be able to specify the max file size, at which a file gets rotated.
- **_CONF-4.6_ - Max Directory Size:** Users **_SHOULD_** be able to specify the max directory size, at which a directory gets rotated.
- **_CONF-4.7_ - File Monitoring Interval:** Users **_SHOULD_** be able to specify an interval, which **_SHOULD_** change
how long the tool waits before checking if new files are available.
### _TOPIC-5_ - Performance Requirements
- **_PERF-5.1_ - Efficiency:** The tool **_SHOULD_** efficiently monitor and process files without excessive resource consumption.
- **_PERF-5.2_ - Interval Delay:** The tool **_SHOULD_** do its work with no more than 10 seconds delay after its interval.
### _TOPIC-6_ - Security & Data Protection
- **_SEC-6.1_ - No Insecure Connection:** The tool **_MUST_** send data only through a secure connection.
- **_SEC-6.2_ - GDPR compliance:** The tool **_MUST NOT_** upload data if the user has not agreed to share this information.
### _TOPIC-7_ - Testing
- **_TST-7.1_ - Unit Tests:** Comprehensive unit tests **_SHOULD_** be written to cover major functionalities.
- **_TST-7.2_ - Integration Tests:** Integration tests **_SHOULD_** be conducted to ensure all parts of the tool work together seamlessly.
## Definitions
- Default Journal Directory: /run/log/journal/<machine_id>
- Machine ID can be found at /etc/machine-id
- Default Output Directory: /run/log/filtered-journal
## Config Defaults
- **Journal Directory**
- Type: Path
- **Required**: This value **_MUST_** be provided as a start parameter.
- **Output Directory**
- Type: Path
- **Required**: This value **_MUST_** be provided as a start parameter.
- **Trigger Priority**
- Type: Enum
- Valid Values: _Emergency, Alert, Critical, Error, Warning, Notice, Info, Debug_
- Default Value: _Warning_
- **Journal Context**
- Type: Integer
- Unit: Seconds
- Default Value: _15_
- **Max File Size**
- Type: Integer
- Unit: Bytes
- Default Value: _8388608_ (8 MB)
- **Max Directory Size**
- Type: Integer
- Unit: Bytes
- Default Value: _75497472_ (72 MB)
- **File Monitoring Interval**
- Type: Integer
- Unit: Seconds
- Default Value: _10_

out.pdf (binary file not shown)

req.yml (226 changes)

@@ -1,203 +1,47 @@
-name: journal-uploader
-version: 1.0.0
-description: |-
-  The journal-uploader has two main functionalities.
-  - Take a stream of log messages and filter them depending on their severity
-  - Upload journal logs for a specified time when activated through cloud call
-topics:
-  TOPIC-1:
-    name: Journal Watcher
-    subtopics:
-      SUB-1.1:
-        name: File Monitoring
-        requirements:
-          REQ-1.1.1:
-            name: Continuous Monitoring
-            description: The tool must continuously monitor a designated directory.
-      SUB-1.2:
-        name: File Detection
-        requirements:
-          REQ-1.2.1:
-            name: Detection of New Files
-            description: The tool must detect the addition of new files in the monitored directory.
-          REQ-1.2.2:
-            name: Avoid Re-processing
-            description: The tool must not process files that have already been processed.
-  TOPIC-2:
-    name: Traced Logging
-    subtopics:
-      SUB-2.1:
-        name: File Processing
-        requirements:
-          REQ-2.1.1:
-            name: Reading Log Messages
-            description: When a new file is processed, each log message should be put into a buffer.
-          REQ-2.1.2:
-            name: Filtering Log Messages
-            description: |-
-              The tool will search for messages of a defined priority (Trigger Priority).
-              Each message of this priority, as well as all messages before and after, which are inside a defined timespan, must
-              get written into a file. Every other message should get dropped.
-          REQ-2.1.3:
-            name: No Duplicate Log Messages
-            description: The tool shall make sure that no log entry will be written to the file twice.
-      SUB-2.2:
-        name: Traced Log Rotation
-        requirements:
-          REQ-2.2.1:
-            name: Rotating Files
-            description: |-
-              When the size of the current traced log file exceeds a certain threshold,
-              it must be closed and a new file must be opened for writing.
-          REQ-2.2.2:
-            name: Compression of Rotated Files
-            description: Each traced log file must be compressed after it has been rotated.
-          REQ-2.2.3:
-            name: Rotating Directory
-            description: |-
-              When the directory size exceeds a certain threshold, the tool must delete the oldest
-              files in the directory, until the size is below the threshold again.
-  TOPIC-3:
-    name: Remote Journal Logging
-    subtopics:
-      SUB-3.1:
-        name: Service Activation
-        requirements:
-          REQ-3.1.1:
-            name: Cloud Activation
-            description: |-
-              The remote journal logging shall be startable through a function call from the cloud.
-              The API call takes the duration and max interval as arguments.
-          REQ-3.1.2:
-            name: Duration
-            description: The remote journal logging should stay active, until it reaches the specified duration.
-          REQ-3.1.3:
-            name: Max Interval
-            description: |-
-              If no upload was done after the amount of time specified in max interval,
-              a log rotation shall be triggered, which will in turn get picked up by the file monitoring.
-          REQ-3.1.4:
-            name: Analytics Not Accepted
-            description: |-
-              If the user has not accepted the usage of their data, the cloud call must
-              result in an error.
-      SUB-3.2:
-        name: File Processing
-        requirements:
-          REQ-3.2.1:
-            name: File Upload
-            description: When a file gets detected, it should get uploaded to the cloud.
-          REQ-3.2.2:
-            name: No Duplicate Files
-            description: Already processed files must not get uploaded again.
-          REQ-3.2.3:
-            name: Revoking Analytics
-            description: |-
-              If the user revokes the usage of their data, the service may continue running
-              but must not upload any data until the user allows the usage of their data again.
-          REQ-3.2.4:
-            name: Duration Expired
-            description: After the specified duration has expired, the service should stop uploading files.
-  TOPIC-4:
-    name: Configuration
-    requirements:
-      CONF-4.1:
-        name: Journal Directory
-        description: Users should be able to specify the directory to be monitored for journal files.
-      CONF-4.2:
-        name: Output Directory
-        description: Users should be able to specify the directory into which the final files will be written.
-      CONF-4.3:
-        name: Trigger Priority
-        description: Users should be able to specify which priority triggers the filtering.
-      CONF-4.4:
-        name: Journal Context
-        description: Users should be able to specify how many seconds of context will be added
-          to traced logs when encountering a trigger priority.
-      CONF-4.5:
-        name: Max File Size
-        description: Users should be able to specify the max file size, at which a file gets rotated.
-      CONF-4.6:
-        name: Max Directory Size
-        description: Users should be able to specify the max directory size, at which a directory gets rotated.
-      CONF-4.7:
-        name: File Monitoring Interval
-        description: |-
-          Users should be able to specify an interval, which should change
-          how long the tool waits before checking if new files are available.
-  TOPIC-5:
-    name: Performance Requirements
-    requirements:
-      PERF-5.1:
-        name: Efficiency
-        description: The tool should efficiently monitor and process files without excessive resource consumption.
-      PERF-5.2:
-        name: Interval Delay
-        description: The tool should do its work with no more than 10 seconds delay after its interval.
-  TOPIC-6:
-    name: Security & Data Protection
-    requirements:
-      SEC-6.1:
-        name: No Insecure Connection
-        description: The tool must send data only through a secure connection.
-      SEC-6.2:
-        name: GDPR compliance
-        description: The tool must not upload data if the user has not agreed to share this information.
-  TOPIC-7:
-    name: Testing
-    requirements:
-      TST-7.1:
-        name: Unit Tests
-        description: Comprehensive unit tests should be written to cover major functionalities.
-      TST-7.2:
-        name: Integration Tests
-        description: Integration tests should be conducted to ensure all parts of the tool work together seamlessly.
-definitions:
-- name: Default Journal Directory
-  value: /run/log/journal/<machine_id>
-  additional_info:
-  - Machine ID can be found at /etc/machine-id
-- name: Default Output Directory
-  value: /run/log/filtered-journal
-config_defaults:
-- name: Journal Directory
-  type: Path
-- name: Output Directory
-  type: Path
-- name: Trigger Priority
-  type: Enum
-  valid_values:
-  - Emergency
-  - Alert
-  - Critical
-  - Error
-  - Warning
-  - Notice
-  - Info
-  - Debug
-  default_value: Warning
-- name: Journal Context
-  type: Integer
-  unit: Seconds
-  default_value: '15'
-- name: Max File Size
-  type: Integer
-  unit: Bytes
-  default_value: '8388608'
-  hint: (8 MB)
-- name: Max Directory Size
-  type: Integer
-  unit: Bytes
-  default_value: '75497472'
-  hint: (72 MB)
-- name: File Monitoring Interval
-  type: Integer
-  unit: Seconds
-  default_value: '10'
+name: Req
+version: 1.0.0
+description: |-
+  The project has the following functionalities:
+  - Output the schema that is used to specify requirements
+  - Convert requirements from one of the allowed text formats to Markdown
+  - Convert requirements from one of the allowed text formats to HTML
+  - Check test output for requirements and output a summary of requirement test status
+topics:
+  TOPIC-1:
+    name: Output Data
+    requirements:
+      REQ-1.1:
+        name: Output Json Schema
+        description: The tool must be able to print a valid JSON schema of the input format
+      REQ-1.2:
+        name: Demo Data
+        description: The tool should be able to output a valid YAML string to be used as a starting point
+  TOPIC-2:
+    name: Reading Requirement Files
+    requirements:
+      REQ-2.1:
+        name: Parsing From Multiple Data Formats
+        description: 'The tool must be able to read requirements in the following formats:'
+        additional_info:
+        - YAML
+        - JSON
+        - RSN
+        - TOML
+  TOPIC-3:
+    name: File Processing
+    requirements:
+      REQ-3.1:
+        name: Pretty Print To Markdown
+        description: The tool must be able to produce Markdown, containing all the relevant data from the input data
+      REQ-3.2:
+        name: Pretty Print to HTML
+        description: The tool must be able to produce HTML, containing all the relevant data from the input data
+      REQ-3.3:
+        name: Analyze Test Output
+        description: |
+          The tool must be able to scan text files for requirement IDs and create a summary of the test status of the defined requirements.
+          The IDs must be in one of the following formats, where <ID> is a placeholder for the real id:
+        additional_info:
+        - "<ID>: success"
+        - "<ID>: failed"
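Since demo_project() now simply embeds this file (see the next diff) and the subcommands read it through the new multi-format parse() helper in src/main.rs, a hypothetical round-trip test, not part of this commit, illustrates the contract:

    #[test]
    fn req_yml_round_trips() {
        // parse() tries YAML first, then JSON, RSN, and TOML (see src/main.rs below).
        let project = parse(include_str!("../req.yml")).expect("req.yml should satisfy the schema");
        assert_eq!(project.topics.len(), 3); // TOPIC-1 through TOPIC-3
    }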

(Rust source, file name not shown)

@@ -1,10 +1,9 @@
 use std::fmt;
 
-use indexmap::{indexmap, IndexMap};
+use indexmap::IndexMap;
 use schemars::JsonSchema;
 use serde::de::{self, Unexpected, Visitor};
 use serde::{Deserialize, Deserializer, Serialize, Serializer};
-use stringlit::s;
 
 pub fn my_trim<S>(v: &str, s: S) -> Result<S::Ok, S::Error>
 where
@@ -19,7 +18,7 @@ pub struct Requirement {
     #[serde(serialize_with = "my_trim")]
     pub description: String,
     #[serde(default, skip_serializing_if = "Vec::is_empty")]
-    pub requires: Vec<String>,
+    pub additional_info: Vec<String>,
 }
 
 #[derive(JsonSchema, Debug, Deserialize, Serialize)]
@@ -137,131 +136,7 @@ pub struct Project {
     pub config_defaults: Vec<ConfigDefault>,
 }
 
+#[must_use]
 pub fn demo_project() -> Project {
-    Project {
-        name: s!("journal-uploader"),
-        version: Version {
-            major: 1,
-            minor: 0,
-            patch: 0,
-        },
-        description: s!(r"
-The journal-uploader has two main functionalities.
-- Take a stream of log messages and filter them depending on their severity
-- Upload journal logs for a specified time when activated through cloud call"),
-        topics: indexmap! {
-            s!("FEAT-1") => Topic {
-                name: s!("Traced Logging"),
-                subtopics: indexmap! {
-                    s!("SUB-1") => Topic {
-                        name: s!("File Monitoring"),
-                        requirements: indexmap! {
-                            s!("REQ-1") => Requirement {
-                                name: s!("Continuous Monitoring"),
-                                description: s!(r"The tool must continuously monitor a designated directory."),
-                                requires: vec![],
-                            }
-                        },
-                        subtopics: indexmap! {}
-                    },
-                    s!("SUB-2") => Topic {
-                        name: s!("File Detection"),
-                        requirements: indexmap! {
-                            s!("REQ-1") => Requirement {
-                                name: s!("Detection of New Files"),
-                                description: s!(r"The tool must detect the addition of new files in the monitored directory."),
-                                requires: vec![],
-                            },
-                            s!("REQ-2") => Requirement {
-                                name: s!("Avoid Re-processing"),
-                                description: s!(r"The tool must not process files that have already been processed."),
-                                requires: vec![],
-                            }
-                        },
-                        subtopics: indexmap! {}
-                    },
-                },
-                requirements: indexmap! {},
-            }
-        },
-        definitions: vec![
-            Definition {
-                name: s!("Default Journal Directory"),
-                value: s!("/run/log/journal/<machine_id>"),
-                additional_info: vec![s!("Machine ID can be found at /etc/machine-id")],
-            },
-            Definition {
-                name: s!("Default Output Directory"),
-                value: s!("/run/log/filtered-journal"),
-                additional_info: vec![],
-            },
-        ],
-        config_defaults: vec![
-            ConfigDefault {
-                name: s!("Journal Directory"),
-                typ: s!("Path"),
-                unit: None,
-                valid_values: None,
-                default_value: None,
-                hint: None,
-            },
-            ConfigDefault {
-                name: s!("Output Directory"),
-                typ: s!("Path"),
-                unit: None,
-                valid_values: None,
-                default_value: None,
-                hint: None,
-            },
-            ConfigDefault {
-                name: s!("Trigger Priority"),
-                typ: s!("Enum"),
-                unit: None,
-                valid_values: Some(vec![
-                    s!("Emergency"),
-                    s!("Alert"),
-                    s!("Critical"),
-                    s!("Error"),
-                    s!("Warning"),
-                    s!("Notice"),
-                    s!("Info"),
-                    s!("Debug"),
-                ]),
-                default_value: Some(s!("Warning")),
-                hint: None,
-            },
-            ConfigDefault {
-                name: s!("Journal Context"),
-                typ: s!("Integer"),
-                unit: Some(s!("Seconds")),
-                valid_values: None,
-                default_value: Some(s!("15")),
-                hint: None,
-            },
-            ConfigDefault {
-                name: s!("Max File Size"),
-                typ: s!("Integer"),
-                unit: Some(s!("Bytes")),
-                valid_values: None,
-                default_value: Some(s!("8388608")),
-                hint: Some(s!("(8 MB)")),
-            },
-            ConfigDefault {
-                name: s!("Max Directory Size"),
-                typ: s!("Integer"),
-                unit: Some(s!("Bytes")),
-                valid_values: None,
-                default_value: Some(s!("75497472")),
-                hint: Some(s!("(72 MB)")),
-            },
-            ConfigDefault {
-                name: s!("File Monitoring Interval"),
-                typ: s!("Integer"),
-                unit: Some(s!("Seconds")),
-                valid_values: None,
-                default_value: Some(s!("10")),
-                hint: None,
-            },
-        ],
-    }
+    serde_yaml::from_str(include_str!("../req.yml")).expect("Should never happen!")
 }

src/main.rs

@@ -41,14 +41,18 @@ fn check_requirements(
 ) {
     for (id, requirement) in requirements {
         if allowed_requirements.is_match(id) {
-            let status = if test_results.contains(&format!("{id} succeeded")) {
+            let status = if test_results.contains(&format!("{} succeeded", id.trim())) {
                 ":white_check_mark:"
-            } else if test_results.contains(&format!("{id} failed")) {
+            } else if test_results.contains(&format!("{} failed", id.trim())) {
                 ":x:"
             } else {
                 ":warning:"
             };
-            output.push(format!("- _{id}_ - {}: {status}", requirement.name));
+            output.push(format!(
+                "- _{}_ - {}: {status}",
+                id.trim(),
+                requirement.name
+            ));
         }
     }
 }
@@ -83,7 +87,12 @@ fn check_topics(
         {
             continue;
         }
-        output.push(format!("{} _{id}_ - {}", "#".repeat(level), topic.name));
+        output.push(format!(
+            "{} _{}_ - {}",
+            "#".repeat(level),
+            id.trim(),
+            topic.name
+        ));
         if !topic.requirements.is_empty() {
             check_requirements(
                 test_results,
@@ -109,18 +118,23 @@
 fn add_requirements(output: &mut Vec<String>, requirements: &IndexMap<String, Requirement>) {
     for (id, requirement) in requirements {
         output.push(format!(
-            "- **_{id}_ - {}:** {}",
+            "- **_{}_ - {}:** {}",
+            id.trim(),
             requirement.name.trim(),
             requirement.description.trim()
         ));
+        for info in &requirement.additional_info {
+            output.push(format!("  - {}", info.trim()));
+        }
     }
 }
 
 fn add_topics(output: &mut Vec<String>, topics: &IndexMap<String, Topic>, level: usize) {
     for (id, topic) in topics {
         output.push(format!(
-            "{} _{id}_ - {}",
+            "{} _{}_ - {}",
             "#".repeat(level),
+            id.trim(),
             topic.name.trim()
         ));
         if !topic.requirements.is_empty() {
@@ -141,6 +155,9 @@ enum Command {
     Markdown {
         requirements: PathBuf,
     },
+    Html {
+        requirements: PathBuf,
+    },
     Check {
         #[arg(short, long, default_value = "REQ-.*")]
         allowed_requirements: String,
@@ -155,18 +172,15 @@ struct Args {
     command: Command,
 }
 
-fn main() -> anyhow::Result<()> {
-    let Args { command } = Args::parse();
-    match command {
-        Command::Demo => {
-            println!("{}", serde_yaml::to_string(&demo_project())?);
-        }
-        Command::Schema => {
-            let schema = schema_for!(Project);
-            println!("{}", serde_json::to_string_pretty(&schema).unwrap());
-        }
-        Command::Markdown { requirements } => {
-            let project: Project = serde_yaml::from_str(&std::fs::read_to_string(requirements)?)?;
+fn parse(value: &str) -> anyhow::Result<Project> {
+    Ok(serde_yaml::from_str(value)
+        .or_else(|_| serde_json::from_str(value))
+        .or_else(|_| rsn::from_str(value))
+        .or_else(|_| toml::from_str(value))?)
+}
+
+fn to_markdown(requirements: PathBuf) -> anyhow::Result<String> {
+    let project: Project = parse(&std::fs::read_to_string(requirements)?)?;
 
     let mut output = vec![
         format!("# Requirements for {}", project.name.trim()),
@@ -228,7 +242,10 @@
         } else {
             output.push(format!(
                 "  - **Required**: This value **_MUST_** be provided as a start parameter.{}",
-                default.hint.map(|h| format!(" {}", h.trim())).unwrap_or_default()
+                default
+                    .hint
+                    .map(|h| format!(" {}", h.trim()))
+                    .unwrap_or_default()
             ));
         }
         output.push(nl());
@@ -239,7 +256,29 @@
     for word in HIGHLIGHTED_WORDS {
        output = output.replace(word, &format!("**_{}_**", word.to_uppercase()));
     }
+    Ok(output)
+}
 
+fn main() -> anyhow::Result<()> {
+    let Args { command } = Args::parse();
+    match command {
+        Command::Demo => {
+            println!("{}", serde_yaml::to_string(&demo_project())?);
+        }
+        Command::Html { requirements } => {
+            let output = to_markdown(requirements)?;
+            println!(
+                "{}",
+                markdown::to_html_with_options(&output, &markdown::Options::gfm())
+                    .map_err(|e| anyhow::anyhow!("{e}"))?
+            );
+        }
+        Command::Schema => {
+            let schema = schema_for!(Project);
+            println!("{}", serde_json::to_string_pretty(&schema).unwrap());
+        }
+        Command::Markdown { requirements } => {
+            let output = to_markdown(requirements)?;
             println!("{output}");
         }
         Command::Check {
@@ -249,7 +288,7 @@
         } => {
             let re = Regex::new(&allowed_requirements).unwrap();
             let test_results = std::fs::read_to_string(test_results)?;
-            let project: Project = serde_yaml::from_str(&std::fs::read_to_string(requirements)?)?;
+            let project: Project = parse(&std::fs::read_to_string(requirements)?)?;
 
             let mut output = vec![format!("# Test Results - {}", project.name)];
             check_topics(&test_results, &mut output, &project.topics, &re, 2);

(deleted file, name not shown, 36 lines)

@@ -1,36 +0,0 @@
# Test Results - journal-uploader
## _TOPIC-1_ - Journal Watcher
### _SUB-1.1_ - File Monitoring
- _REQ-1.1.1_ - Continuous Monitoring: :white_check_mark:
### _SUB-1.2_ - File Detection
- _REQ-1.2.1_ - Detection of New Files: :white_check_mark:
- _REQ-1.2.2_ - Avoid Re-processing: :x:
## _TOPIC-2_ - Traced Logging
### _SUB-2.1_ - File Processing
- _REQ-2.1.1_ - Reading Log Messages: :white_check_mark:
- _REQ-2.1.2_ - Filtering Log Messages: :white_check_mark:
- _REQ-2.1.3_ - No Duplicate Log Messages: :x:
### _SUB-2.2_ - Traced Log Rotation
- _REQ-2.2.1_ - Rotating Files: :white_check_mark:
- _REQ-2.2.2_ - Compression of Rotated Files: :white_check_mark:
- _REQ-2.2.3_ - Rotating Directory: :x:
## _TOPIC-3_ - Remote Journal Logging
### _SUB-3.1_ - Service Activation
- _REQ-3.1.1_ - Cloud Activation: :white_check_mark:
- _REQ-3.1.2_ - Duration: :white_check_mark:
- _REQ-3.1.3_ - Max Interval: :x:
- _REQ-3.1.4_ - Analytics Not Accepted: :white_check_mark:
### _SUB-3.2_ - File Processing
- _REQ-3.2.1_ - File Upload: :white_check_mark:
- _REQ-3.2.2_ - No Duplicate Files: :x:
- _REQ-3.2.3_ - Revoking Analytics: :white_check_mark:
- _REQ-3.2.4_ - Duration Expired: :warning:
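
The summary above is what the check subcommand produces: it scans the raw test output for bare `<ID> succeeded` / `<ID> failed` substrings (see the check_requirements hunk in src/main.rs). A hypothetical excerpt of a matching test_result.txt, not taken from the repository:

    REQ-1.1.1 succeeded
    REQ-1.2.1 succeeded
    REQ-1.2.2 failed

Any requirement ID that matches the allowed-requirements pattern (default "REQ-.*") but never appears in the file is reported as :warning:.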