update

This commit is contained in:
parent 7c2540a185
commit ed2b803ce5
@@ -1 +1,2 @@
/target
/out
@@ -332,6 +332,15 @@ dependencies = [
 "hashbrown",
]

[[package]]
name = "markdown"
version = "1.0.0-alpha.17"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "21e27d6220ce21f80ce5c4201f23a37c6f1ad037c72c9d1ff215c2919605a5d6"
dependencies = [
 "unicode-id",
]

[[package]]
name = "memchr"
version = "2.7.2"
@@ -469,6 +478,7 @@ dependencies = [
 "clap",
 "crossterm 0.27.0",
 "indexmap",
 "markdown",
 "ratatui",
 "regex",
 "rsn",
@@ -745,6 +755,12 @@ dependencies = [
 "unicode-width",
]

[[package]]
name = "unicode-id"
version = "0.3.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b1b6def86329695390197b82c1e244a54a131ceb66c996f2088a3876e2ae083f"

[[package]]
name = "unicode-ident"
version = "1.0.12"
@@ -11,6 +11,7 @@ anyhow = "1.0.83"
clap = { version = "4.5.4", features = ["derive"] }
crossterm = "0.27.0"
indexmap = { version = "2.2.6", features = ["serde"] }
markdown = "1.0.0-alpha.17"
ratatui = "0.26.2"
regex = "1.10.4"
rsn = "0.1.0"
@@ -0,0 +1,12 @@
#!/bin/bash

source_dir="$(dirname "${BASH_SOURCE[0]}")"
pushd "$source_dir"

mkdir -p out
cargo build
cargo run -q -- schema > out/schema.json
cargo run -q -- demo > out/demo.yml
cargo run -q -- md req.yml > out/requirements.md
cargo run -q -- html req.yml > out/requirements.html
cargo run -q -- check req.yml test_result.txt > out/text_result.md
orig.md
@@ -1,126 +0,0 @@
# Requirements for journal-uploader

[[_TOC_]]

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED",
"MAY", and "OPTIONAL" in this document are to be interpreted as described in
[RFC 2119](https://datatracker.ietf.org/doc/html/rfc2119).

## Purpose
The journal-uploader has two main functionalities.
- Take a stream of log messages and filter them depending on their severity
- Upload journal logs for a specified time when activated through cloud call

## Requirements
### 1. Traced Logging
#### 1.1 File Monitoring
- **1.1.1 Continuous Monitoring:** The tool **_MUST_** continuously monitor a designated directory.

#### 1.2 File Detection
- **1.2.1 Detection of New Files:** The tool **_MUST_** detect the addition of new files in the monitored directory.
- **1.2.2 Avoid Re-processing:** The tool **_MUST NOT_** process files that have already been processed.

#### 1.3 File Processing
- **1.3.1 Reading Log Messages:** When a new file is processed, each log message **_SHOULD_** be put into a buffer.
- **1.3.2 Filtering Log Messages:** The tool will search for messages of a defined priority (Trigger Priority).
Each message of this priority, as well as all messages before and after it that fall within a defined timespan, **_MUST_**
be written into a file. Every other message **_SHOULD_** be dropped.
- **1.3.3 No Duplicate Log Messages:** The tool **_SHALL_** make sure that no log entry is written to the file twice.

#### 1.4 Traced Log Rotation
- **1.4.1 Rotating Files:** When the size of the current traced log file exceeds a certain threshold,
it **_MUST_** be closed and a new file **_MUST_** be opened for writing.
- **1.4.2 Compression of Rotated Files:** Each traced log file **_MUST_** be compressed after it is rotated.
- **1.4.3 Rotating Directory:** When the directory size exceeds a certain threshold, the tool **_MUST_** delete the oldest
files in the directory, until the size is below the threshold again.

### 2. Remote Journal Logging
#### 2.1 Service Activation
- **2.1.1 Cloud Activation:** The remote journal logging **_SHALL_** be startable through a function call from the cloud.
The API call takes the duration and max interval as arguments.
- **2.1.2 Duration:** The remote journal logging **_SHOULD_** stay active until it reaches the specified duration.
- **2.1.3 Max Interval:** If no upload was done after the amount of time specified in max interval,
a log rotation **_SHALL_** be triggered, which will in turn be picked up by the file monitoring.
- **2.1.4 Analytics Not Accepted:** If the user has not accepted the usage of their data, the cloud call **_MUST_**
result in an error.

#### 2.2 File Monitoring
- **2.2.1 Continuous Monitoring:** The tool **_SHOULD_** continuously monitor a designated directory.

#### 2.3 File Detection
- **2.3.1 Detection of New Files:** The tool **_MUST_** detect the addition of new files in the monitored directory.
- **2.3.2 Avoid Re-processing:** The tool **_MUST NOT_** process files that have already been processed.

#### 2.4 File Processing
- **2.4.1 File Upload:** When a file is detected, it **_SHOULD_** be uploaded to the cloud.
- **2.4.2 No Duplicate Files:** Already processed files **_MUST NOT_** be uploaded again.
- **2.4.3 Revoking Analytics:** If the user revokes the usage of their data, the service **_MAY_** continue running
but **_MUST NOT_** upload any data until the user allows the usage of their data again.
- **2.4.4 Duration Expired:** After the specified duration has expired, the service **_SHOULD_** stop uploading files.

### 3. Configuration
- **3.1 Configurable Journal Directory:** Users **_SHOULD_** be able to specify the directory to be monitored for
journal files.
- **3.2 Configurable Output Directory:** Users **_SHOULD_** be able to specify the directory into which the final files
will be written.
- **3.3 Configurable Trigger Priority:** Users **_SHOULD_** be able to specify which priority triggers the filtering.
- **3.4 Configurable Journal Context:** Users **_SHOULD_** be able to specify how many seconds of context will be added
to traced logs when encountering a trigger priority.
- **3.5 Configurable Max File Size:** Users **_SHOULD_** be able to specify the max file size at which a file gets rotated.
- **3.6 Configurable Max Directory Size:** Users **_SHOULD_** be able to specify the max directory size at which a
directory gets rotated.
- **3.7 Configurable File Monitoring Interval:** Users **_SHOULD_** be able to specify an interval that controls
how long the tool waits before checking whether new files are available.

### 4. Performance Requirements
- **4.1 Efficiency:** The tool **_SHOULD_** efficiently monitor and process files without excessive resource consumption.
- **4.2 Interval Delay:** The tool **_SHOULD_** do its work with no more than 10 seconds delay after its interval.

### 5. Data Protection
- **5.1 No Insecure Connection:** The tool **_MUST_** send data only through a secure connection.
- **5.2 GDPR compliance:** The tool **_MUST NOT_** upload data if the user has not agreed to share this information.

### 6. Testing
- **6.1 Unit Tests:** Comprehensive unit tests **_SHOULD_** be written to cover major functionalities.
- **6.2 Integration Tests:** Integration tests **_SHOULD_** be conducted to ensure all parts of the tool work together
seamlessly.

## Definitions
- Default Journal Directory: /run/log/journal/<machine_id>
  - Machine ID can be found at /etc/machine-id
- Default Output Directory: /run/log/filtered-journal

## Config Defaults
- **Journal Directory**
  - type: Path
  - **Required**: This value **_MUST_** be provided as a start parameter.

- **Output Directory**
  - type: Path
  - **Required**: This value **_MUST_** be provided as a start parameter.

- **Trigger Priority**
  - type: Enum
  - Valid Values: _Emergency, Alert, Critical, Error, Warning, Notice, Info, Debug_
  - Default Value: _Warning_

- **Journal Context**
  - type: Integer
  - unit: Seconds
  - Default Value: _15_

- **Max File Size**
  - type: Integer
  - unit: Bytes
  - Default Value: _8388608_ (8 MB)

- **Max Directory Size**
  - type: Integer
  - unit: Bytes
  - Default Value: _75497472_ (72 MB)

- **File Monitoring Interval**
  - type: Integer
  - unit: Seconds
  - Default Value: _10_
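The byte values in the Config Defaults above are binary megabytes (n * 1024 * 1024 bytes). A quick sanity check of that arithmetic:

```rust
fn main() {
    // Binary megabytes: n * 1024 * 1024 bytes.
    assert_eq!(8 * 1024 * 1024, 8_388_608); // Max File Size: 8 MB
    assert_eq!(72 * 1024 * 1024, 75_497_472); // Max Directory Size: 72 MB
    println!("ok");
}
```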
out.md
@@ -1,120 +0,0 @@
# Requirements for journal-uploader

[[_TOC_]]

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED",
"MAY", and "OPTIONAL" in this document are to be interpreted as described in
[RFC 2119](https://datatracker.ietf.org/doc/html/rfc2119).


## Description
The journal-uploader has two main functionalities.
- Take a stream of log messages and filter them depending on their severity
- Upload journal logs for a specified time when activated through cloud call

## Requirements
### _TOPIC-1_ - Journal Watcher
#### _SUB-1.1_ - File Monitoring
- **_REQ-1.1.1_ - Continuous Monitoring:** The tool **_MUST_** continuously monitor a designated directory.

#### _SUB-1.2_ - File Detection
- **_REQ-1.2.1_ - Detection of New Files:** The tool **_MUST_** detect the addition of new files in the monitored directory.
- **_REQ-1.2.2_ - Avoid Re-processing:** The tool **_MUST NOT_** process files that have already been processed.


### _TOPIC-2_ - Traced Logging
#### _SUB-2.1_ - File Processing
- **_REQ-2.1.1_ - Reading Log Messages:** When a new file is processed, each log message **_SHOULD_** be put into a buffer.
- **_REQ-2.1.2_ - Filtering Log Messages:** The tool will search for messages of a defined priority (Trigger Priority).
Each message of this priority, as well as all messages before and after it that fall within a defined timespan, **_MUST_**
be written into a file. Every other message **_SHOULD_** be dropped.
- **_REQ-2.1.3_ - No Duplicate Log Messages:** The tool **_SHALL_** make sure that no log entry is written to the file twice.

#### _SUB-2.2_ - Traced Log Rotation
- **_REQ-2.2.1_ - Rotating Files:** When the size of the current traced log file exceeds a certain threshold,
it **_MUST_** be closed and a new file **_MUST_** be opened for writing.
- **_REQ-2.2.2_ - Compression of Rotated Files:** Each traced log file **_MUST_** be compressed after it is rotated.
- **_REQ-2.2.3_ - Rotating Directory:** When the directory size exceeds a certain threshold, the tool **_MUST_** delete the oldest
files in the directory, until the size is below the threshold again.


### _TOPIC-3_ - Remote Journal Logging
#### _SUB-3.1_ - Service Activation
- **_REQ-3.1.1_ - Cloud Activation:** The remote journal logging **_SHALL_** be startable through a function call from the cloud.
The API call takes the duration and max interval as arguments.
- **_REQ-3.1.2_ - Duration:** The remote journal logging **_SHOULD_** stay active until it reaches the specified duration.
- **_REQ-3.1.3_ - Max Interval:** If no upload was done after the amount of time specified in max interval,
a log rotation **_SHALL_** be triggered, which will in turn be picked up by the file monitoring.
- **_REQ-3.1.4_ - Analytics Not Accepted:** If the user has not accepted the usage of their data, the cloud call **_MUST_**
result in an error.

#### _SUB-3.2_ - File Processing
- **_REQ-3.2.1_ - File Upload:** When a file is detected, it **_SHOULD_** be uploaded to the cloud.
- **_REQ-3.2.2_ - No Duplicate Files:** Already processed files **_MUST NOT_** be uploaded again.
- **_REQ-3.2.3_ - Revoking Analytics:** If the user revokes the usage of their data, the service **_MAY_** continue running
but **_MUST NOT_** upload any data until the user allows the usage of their data again.
- **_REQ-3.2.4_ - Duration Expired:** After the specified duration has expired, the service **_SHOULD_** stop uploading files.


### _TOPIC-4_ - Configuration
- **_CONF-4.1_ - Journal Directory:** Users **_SHOULD_** be able to specify the directory to be monitored for journal files.
- **_CONF-4.2_ - Output Directory:** Users **_SHOULD_** be able to specify the directory into which the final files will be written.
- **_CONF-4.3_ - Trigger Priority:** Users **_SHOULD_** be able to specify which priority triggers the filtering.
- **_CONF-4.4_ - Journal Context:** Users **_SHOULD_** be able to specify how many seconds of context will be added to traced logs when encountering a trigger priority.
- **_CONF-4.5_ - Max File Size:** Users **_SHOULD_** be able to specify the max file size at which a file gets rotated.
- **_CONF-4.6_ - Max Directory Size:** Users **_SHOULD_** be able to specify the max directory size at which a directory gets rotated.
- **_CONF-4.7_ - File Monitoring Interval:** Users **_SHOULD_** be able to specify an interval that controls
how long the tool waits before checking whether new files are available.

### _TOPIC-5_ - Performance Requirements
- **_PERF-5.1_ - Efficiency:** The tool **_SHOULD_** efficiently monitor and process files without excessive resource consumption.
- **_PERF-5.2_ - Interval Delay:** The tool **_SHOULD_** do its work with no more than 10 seconds delay after its interval.

### _TOPIC-6_ - Security & Data Protection
- **_SEC-6.1_ - No Insecure Connection:** The tool **_MUST_** send data only through a secure connection.
- **_SEC-6.2_ - GDPR compliance:** The tool **_MUST NOT_** upload data if the user has not agreed to share this information.

### _TOPIC-7_ - Testing
- **_TST-7.1_ - Unit Tests:** Comprehensive unit tests **_SHOULD_** be written to cover major functionalities.
- **_TST-7.2_ - Integration Tests:** Integration tests **_SHOULD_** be conducted to ensure all parts of the tool work together seamlessly.


## Definitions
- Default Journal Directory: /run/log/journal/<machine_id>
  - Machine ID can be found at /etc/machine-id
- Default Output Directory: /run/log/filtered-journal

## Config Defaults
- **Journal Directory**
  - Type: Path
  - **Required**: This value **_MUST_** be provided as a start parameter.

- **Output Directory**
  - Type: Path
  - **Required**: This value **_MUST_** be provided as a start parameter.

- **Trigger Priority**
  - Type: Enum
  - Valid Values: _Emergency, Alert, Critical, Error, Warning, Notice, Info, Debug_
  - Default Value: _Warning_

- **Journal Context**
  - Type: Integer
  - Unit: Seconds
  - Default Value: _15_

- **Max File Size**
  - Type: Integer
  - Unit: Bytes
  - Default Value: _8388608_ (8 MB)

- **Max Directory Size**
  - Type: Integer
  - Unit: Bytes
  - Default Value: _75497472_ (72 MB)

- **File Monitoring Interval**
  - Type: Integer
  - Unit: Seconds
  - Default Value: _10_
req.yml
@ -1,203 +1,47 @@
|
|||
name: journal-uploader
|
||||
name: Req
|
||||
version: 1.0.0
|
||||
description: |-
|
||||
The journal-uploader has two main functionalities.
|
||||
- Take a stream of log messages and filter them depending on their severity
|
||||
- Upload journal logs for a specified time when activated through cloud call
|
||||
The project has the following functionalities:
|
||||
- Output the schema that is used to specify requirements
|
||||
- Convert requirements from one of the allowed text formats to Markdown
|
||||
- Convert requirements from one of the allowed text formats to HTML
|
||||
- Check test output for requirements and output a summary of requirement test status
|
||||
topics:
|
||||
TOPIC-1:
|
||||
name: Journal Watcher
|
||||
subtopics:
|
||||
SUB-1.1:
|
||||
name: File Monitoring
|
||||
requirements:
|
||||
REQ-1.1.1:
|
||||
name: Continuous Monitoring
|
||||
description: The tool must continuously monitor a designated directory.
|
||||
SUB-1.2:
|
||||
name: File Detection
|
||||
requirements:
|
||||
REQ-1.2.1:
|
||||
name: Detection of New Files
|
||||
description: The tool must detect the addition of new files in the monitored directory.
|
||||
REQ-1.2.2:
|
||||
name: Avoid Re-processing
|
||||
description: The tool must not process files that have already been processed.
|
||||
|
||||
name: Output Data
|
||||
requirements:
|
||||
REQ-1.1:
|
||||
name: Output Json Schema
|
||||
description: The tool must be able to print a valid JSON schema of the input format
|
||||
REQ-1.2:
|
||||
name: Demo Data
|
||||
description: The tool should be able to output a valid YAML string to be used as a starting point
|
||||
TOPIC-2:
|
||||
name: Traced Logging
|
||||
subtopics:
|
||||
SUB-2.1:
|
||||
name: File Processing
|
||||
requirements:
|
||||
REQ-2.1.1:
|
||||
name: Reading Log Messages
|
||||
description: When a new file is processed, each log message should be put into a buffer.
|
||||
REQ-2.1.2:
|
||||
name: Filtering Log Messages
|
||||
description: |-
|
||||
The tool will search for messages of a defined priority (Trigger Priority).
|
||||
Each message of this priority, as well as all messages before and after, which are inside a defined timespan, must
|
||||
get written into a file. Every other message should gets dropped.
|
||||
REQ-2.1.3:
|
||||
name: No Duplicate Log Messages
|
||||
description: The tool shall make sure that no log entry will be written to the file twice.
|
||||
SUB-2.2:
|
||||
name: Traced Log Rotation
|
||||
requirements:
|
||||
REQ-2.2.1:
|
||||
name: Rotating Files
|
||||
description: |-
|
||||
When the size of the current traced log file exceeds a certain threshold,
|
||||
it must be closed and a new file must be opened for writing.
|
||||
REQ-2.2.2:
|
||||
name: Compression of Rotated Files
|
||||
description: Each traced log file must get compressed after it got rotated.
|
||||
REQ-2.2.3:
|
||||
name: Rotating Directory
|
||||
description: |-
|
||||
When the directory size exceeds a certain threshold, the tool must delete the oldest
|
||||
files in the directory, until the size is below the threshold again.
|
||||
name: Reading Requirement Files
|
||||
requirements:
|
||||
REQ-2.1:
|
||||
name: Parsing From Multiple Data Formats
|
||||
description: 'The tool must be able to read requirements in the following formats:'
|
||||
additional_info:
|
||||
- YAML
|
||||
- JSON
|
||||
- RSN
|
||||
- TOML
|
||||
|
||||
TOPIC-3:
|
||||
name: Remote Journal Logging
|
||||
subtopics:
|
||||
SUB-3.1:
|
||||
name: Service Activation
|
||||
requirements:
|
||||
REQ-3.1.1:
|
||||
name: Cloud Activation
|
||||
description: |-
|
||||
The remote journal logging shall be startable through a function call from the cloud.
|
||||
The api call has the duration and max interval as arguments.
|
||||
REQ-3.1.2:
|
||||
name: Duration
|
||||
description: The remote journal logging should stay active, until it reaches the specified duration.
|
||||
REQ-3.1.3:
|
||||
name: Max Interval
|
||||
description: |-
|
||||
If no upload was done after the amount of time specified in max interval,
|
||||
a log rotation shall be triggered, which will in turn get picked up by the file monitoring.
|
||||
REQ-3.1.4:
|
||||
name: Analytics Not Accepted
|
||||
description: |-
|
||||
If the user has not accepted the usage of their data, the cloud call must
|
||||
result in an error.
|
||||
SUB-3.2:
|
||||
name: File Processing
|
||||
requirements:
|
||||
REQ-3.2.1:
|
||||
name: File Upload
|
||||
description: When a file gets detected, it should get uploaded to the cloud.
|
||||
REQ-3.2.2:
|
||||
name: No Duplicate Files
|
||||
description: Already processed files must not get uploaded again.
|
||||
REQ-3.2.3:
|
||||
name: Revoking Analytics
|
||||
description: |-
|
||||
If the user revokes the usage of their data, the service may continue running
|
||||
but must not upload any data until the user allows the usage of their data again.
|
||||
REQ-3.2.4:
|
||||
name: Duration Expired
|
||||
description: After the specified duration is expired, the service should stop uploading files.
|
||||
|
||||
TOPIC-4:
|
||||
name: Configuration
|
||||
name: File Processing
|
||||
requirements:
|
||||
CONF-4.1:
|
||||
name: Journal Directory
|
||||
description: Users should be able to specify the directory to be monitored for journal files.
|
||||
CONF-4.2:
|
||||
name: Output Directory
|
||||
description: Users should be able to specify the directory into which the final files will be written.
|
||||
CONF-4.3:
|
||||
name: Trigger Priority
|
||||
description: Users should be able to specify which priority triggers the filtering.
|
||||
CONF-4.4:
|
||||
name: Journal Context
|
||||
description: Users should be able to specify how many seconds of context will be added
|
||||
to traced logs when encountering a trigger priority.
|
||||
CONF-4.5:
|
||||
name: Max File Size
|
||||
description: Users should be able to specify the max file size, at which a file gets rotated.
|
||||
CONF-4.6:
|
||||
name: Max Directory Size
|
||||
description: Users should be able to specify the max directory size, at which a directory gets rotated.
|
||||
CONF-4.7:
|
||||
name: File Monitoring Interval
|
||||
description: |-
|
||||
Users should be able to specify an interval, which should change
|
||||
how long the tool waits before checking if new files are available.
|
||||
|
||||
TOPIC-5:
|
||||
name: Performance Requirements
|
||||
requirements:
|
||||
PERF-5.1:
|
||||
name: Efficiency
|
||||
description: The tool should efficiently monitor and process files without excessive resource consumption.
|
||||
PERF-5.2:
|
||||
name: Interval Delay
|
||||
description: The tool should do its work with no more than 10 seconds delay after its interval.
|
||||
|
||||
TOPIC-6:
|
||||
name: Security & Data Protection
|
||||
requirements:
|
||||
SEC-6.1:
|
||||
name: No Insecure Connection
|
||||
description: The tool must send data only through a secure connection.
|
||||
SEC-6.2:
|
||||
name: GDPR compliance
|
||||
description: The tool must not upload data if the user has not agreed to share this information.
|
||||
|
||||
TOPIC-7:
|
||||
name: Testing
|
||||
requirements:
|
||||
TST-7.1:
|
||||
name: Unit Tests
|
||||
description: Comprehensive unit tests should be written to cover major functionalities.
|
||||
TST-7.2:
|
||||
name: Integration Tests
|
||||
description: Integration tests should be conducted to ensure all parts of the tool work together seamlessly.
|
||||
|
||||
definitions:
|
||||
- name: Default Journal Directory
|
||||
value: /run/log/journal/<machine_id>
|
||||
additional_info:
|
||||
- Machine ID can be found at /etc/machine-id
|
||||
- name: Default Output Directory
|
||||
value: /run/log/filtered-journal
|
||||
config_defaults:
|
||||
- name: Journal Directory
|
||||
type: Path
|
||||
- name: Output Directory
|
||||
type: Path
|
||||
- name: Trigger Priority
|
||||
type: Enum
|
||||
valid_values:
|
||||
- Emergency
|
||||
- Alert
|
||||
- Critical
|
||||
- Error
|
||||
- Warning
|
||||
- Notice
|
||||
- Info
|
||||
- Debug
|
||||
default_value: Warning
|
||||
- name: Journal Context
|
||||
type: Integer
|
||||
unit: Seconds
|
||||
default_value: '15'
|
||||
- name: Max File Size
|
||||
type: Integer
|
||||
unit: Bytes
|
||||
default_value: '8388608'
|
||||
hint: (8 MB)
|
||||
- name: Max Directory Size
|
||||
type: Integer
|
||||
unit: Bytes
|
||||
default_value: '75497472'
|
||||
hint: (72 MB)
|
||||
- name: File Monitoring Interval
|
||||
type: Integer
|
||||
unit: Seconds
|
||||
default_value: '10'
|
||||
REQ-3.1:
|
||||
name: Pretty Print To Markdown
|
||||
description: The tool must be able to produce Markdown, containing all the relevant data from the input data
|
||||
REQ-3.2:
|
||||
name: Pretty Print to HTML
|
||||
description: The tool must be able to produce HTML, containing all the relevant data from the input data
|
||||
REQ-3.3:
|
||||
name: Analyze Test Output
|
||||
description: |
|
||||
The tool must be able to scan text files for requirement IDs and create a summary of the test status of the defined requirements.
|
||||
The IDs must be in one of the following formats, where <ID> is a placeholder for the real id:
|
||||
additional_info:
|
||||
- "<ID>: success"
|
||||
- "<ID>: failed"
|
||||
|
|
|
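REQ-3.3 above specifies the line formats the check step scans for. A minimal stdlib-only sketch of such a scan, assuming the "<ID>: success" / "<ID>: failed" formats from the additional_info (the function name and status labels are hypothetical, not the tool's API):

```rust
// Hedged sketch of REQ-3.3: look a requirement ID up in raw test output.
// Status labels here are illustrative; the real tool's output may differ.
fn test_status(id: &str, results: &str) -> &'static str {
    if results.contains(&format!("{id}: success")) {
        "passed"
    } else if results.contains(&format!("{id}: failed")) {
        "failed"
    } else {
        "untested"
    }
}

fn main() {
    let results = "REQ-1.1.1: success\nREQ-2.1.2: failed\n";
    assert_eq!(test_status("REQ-1.1.1", results), "passed");
    assert_eq!(test_status("REQ-2.1.2", results), "failed");
    assert_eq!(test_status("REQ-9.9.9", results), "untested");
    println!("ok");
}
```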
src/lib.rs
@@ -1,10 +1,9 @@
use std::fmt;

use indexmap::{indexmap, IndexMap};
use indexmap::IndexMap;
use schemars::JsonSchema;
use serde::de::{self, Unexpected, Visitor};
use serde::{Deserialize, Deserializer, Serialize, Serializer};
use stringlit::s;

pub fn my_trim<S>(v: &str, s: S) -> Result<S::Ok, S::Error>
where

@@ -19,7 +18,7 @@ pub struct Requirement {
    #[serde(serialize_with = "my_trim")]
    pub description: String,
    #[serde(default, skip_serializing_if = "Vec::is_empty")]
    pub requires: Vec<String>,
    pub additional_info: Vec<String>,
}

#[derive(JsonSchema, Debug, Deserialize, Serialize)]

@@ -137,131 +136,7 @@ pub struct Project {
    pub config_defaults: Vec<ConfigDefault>,
}

#[must_use]
pub fn demo_project() -> Project {
    Project {
        name: s!("journal-uploader"),
        version: Version {
            major: 1,
            minor: 0,
            patch: 0,
        },
        description: s!(r"
The journal-uploader has two main functionalities.
- Take a stream of log messages and filter them depending on their severity
- Upload journal logs for a specified time when activated through cloud call"),
        topics: indexmap! {
            s!("FEAT-1") => Topic {
                name: s!("Traced Logging"),
                subtopics: indexmap! {
                    s!("SUB-1") => Topic {
                        name: s!("File Monitoring"),
                        requirements: indexmap! {
                            s!("REQ-1") => Requirement {
                                name: s!("Continuous Monitoring"),
                                description: s!(r"The tool must continuously monitor a designated directory."),
                                requires: vec![],
                            }
                        },
                        subtopics: indexmap! {}
                    },
                    s!("SUB-2") => Topic {
                        name: s!("File Detection"),
                        requirements: indexmap! {
                            s!("REQ-1") => Requirement {
                                name: s!("Detection of New Files"),
                                description: s!(r"The tool must detect the addition of new files in the monitored directory."),
                                requires: vec![],
                            },
                            s!("REQ-2") => Requirement {
                                name: s!("Avoid Re-processing"),
                                description: s!(r"The tool must not process files that have already been processed."),
                                requires: vec![],
                            }
                        },
                        subtopics: indexmap! {}
                    },
                },
                requirements: indexmap! {},
            }
        },
        definitions: vec![
            Definition {
                name: s!("Default Journal Directory"),
                value: s!("/run/log/journal/<machine_id>"),
                additional_info: vec![s!("Machine ID can be found at /etc/machine-id")],
            },
            Definition {
                name: s!("Default Output Directory"),
                value: s!("/run/log/filtered-journal"),
                additional_info: vec![],
            },
        ],
        config_defaults: vec![
            ConfigDefault {
                name: s!("Journal Directory"),
                typ: s!("Path"),
                unit: None,
                valid_values: None,
                default_value: None,
                hint: None,
            },
            ConfigDefault {
                name: s!("Output Directory"),
                typ: s!("Path"),
                unit: None,
                valid_values: None,
                default_value: None,
                hint: None,
            },
            ConfigDefault {
                name: s!("Trigger Priority"),
                typ: s!("Enum"),
                unit: None,
                valid_values: Some(vec![
                    s!("Emergency"),
                    s!("Alert"),
                    s!("Critical"),
                    s!("Error"),
                    s!("Warning"),
                    s!("Notice"),
                    s!("Info"),
                    s!("Debug"),
                ]),
                default_value: Some(s!("Warning")),
                hint: None,
            },
            ConfigDefault {
                name: s!("Journal Context"),
                typ: s!("Integer"),
                unit: Some(s!("Seconds")),
                valid_values: None,
                default_value: Some(s!("15")),
                hint: None,
            },
            ConfigDefault {
                name: s!("Max File Size"),
                typ: s!("Integer"),
                unit: Some(s!("Bytes")),
                valid_values: None,
                default_value: Some(s!("8388608")),
                hint: Some(s!("(8 MB)")),
            },
            ConfigDefault {
                name: s!("Max Directory Size"),
                typ: s!("Integer"),
                unit: Some(s!("Bytes")),
                valid_values: None,
                default_value: Some(s!("75497472")),
                hint: Some(s!("(72 MB)")),
            },
            ConfigDefault {
                name: s!("File Monitoring Interval"),
                typ: s!("Integer"),
                unit: Some(s!("Seconds")),
                valid_values: None,
                default_value: Some(s!("10")),
                hint: None,
            },
        ],
    }
    serde_yaml::from_str(include_str!("../req.yml")).expect("Should never happen!")
}
src/main.rs
@@ -41,14 +41,18 @@ fn check_requirements(
) {
    for (id, requirement) in requirements {
        if allowed_requirements.is_match(id) {
            let status = if test_results.contains(&format!("{id} succeeded")) {
            let status = if test_results.contains(&format!("{} succeeded", id.trim())) {
                ":white_check_mark:"
            } else if test_results.contains(&format!("{id} failed")) {
            } else if test_results.contains(&format!("{} failed", id.trim())) {
                ":x:"
            } else {
                ":warning:"
            };
            output.push(format!("- _{id}_ - {}: {status}", requirement.name));
            output.push(format!(
                "- _{}_ - {}: {status}",
                id.trim(),
                requirement.name
            ));
        }
    }
}
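The contains-based matching in check_requirements can be isolated as a small function. This hedged sketch (status_for is a hypothetical name, not part of the tool) shows why the id.trim() calls introduced in this hunk matter when IDs carry stray whitespace from the input:

```rust
// Mirrors the matching in check_requirements above; id.trim() ensures
// IDs with stray whitespace still match lines in the test output.
fn status_for(id: &str, test_results: &str) -> &'static str {
    if test_results.contains(&format!("{} succeeded", id.trim())) {
        ":white_check_mark:"
    } else if test_results.contains(&format!("{} failed", id.trim())) {
        ":x:"
    } else {
        ":warning:"
    }
}

fn main() {
    let results = "REQ-1.1.1 succeeded\nREQ-1.2.2 failed";
    // Trailing space on the ID is trimmed away before matching.
    assert_eq!(status_for("REQ-1.1.1 ", results), ":white_check_mark:");
    assert_eq!(status_for("REQ-1.2.2", results), ":x:");
    assert_eq!(status_for("REQ-9.9.9", results), ":warning:");
    println!("ok");
}
```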
@@ -83,7 +87,12 @@ fn check_topics(
        {
            continue;
        }
        output.push(format!("{} _{id}_ - {}", "#".repeat(level), topic.name));
        output.push(format!(
            "{} _{}_ - {}",
            "#".repeat(level),
            id.trim(),
            topic.name
        ));
        if !topic.requirements.is_empty() {
            check_requirements(
                test_results,
@@ -109,18 +118,23 @@ fn check_topics(
fn add_requirements(output: &mut Vec<String>, requirements: &IndexMap<String, Requirement>) {
    for (id, requirement) in requirements {
        output.push(format!(
            "- **_{id}_ - {}:** {}",
            "- **_{}_ - {}:** {}",
            id.trim(),
            requirement.name.trim(),
            requirement.description.trim()
        ));
        for info in &requirement.additional_info {
            output.push(format!(" - {}", info.trim()));
        }
    }
}

fn add_topics(output: &mut Vec<String>, topics: &IndexMap<String, Topic>, level: usize) {
    for (id, topic) in topics {
        output.push(format!(
            "{} _{id}_ - {}",
            "{} _{}_ - {}",
            "#".repeat(level),
            id.trim(),
            topic.name.trim()
        ));
        if !topic.requirements.is_empty() {
@@ -141,6 +155,9 @@ enum Command {
     Markdown {
         requirements: PathBuf,
     },
+    Html {
+        requirements: PathBuf,
+    },
     Check {
         #[arg(short, long, default_value = "REQ-.*")]
         allowed_requirements: String,
@@ -155,91 +172,113 @@ struct Args {
     command: Command,
 }
 
+fn parse(value: &str) -> anyhow::Result<Project> {
+    Ok(serde_yaml::from_str(value)
+        .or_else(|_| serde_json::from_str(value))
+        .or_else(|_| rsn::from_str(value))
+        .or_else(|_| toml::from_str(value))?)
+}
+
+fn to_markdown(requirements: PathBuf) -> anyhow::Result<String> {
+    let project: Project = parse(&std::fs::read_to_string(requirements)?)?;
+
+    let mut output = vec![
+        format!("# Requirements for {}", project.name.trim()),
+        nl(),
+        s!("[[_TOC_]]"),
+        nl(),
+        WORD_DESCRIPTION.trim().to_string(),
+        nl(),
+        format!("**VERSION: {}**", project.version),
+        nl(),
+        s!("## Description"),
+        project.description.trim().to_string(),
+        nl(),
+    ];
+
+    if !project.topics.is_empty() {
+        output.push(s!("## Requirements"));
+        add_topics(&mut output, &project.topics, 3);
+    }
+
+    if !project.definitions.is_empty() {
+        output.push(s!("## Definitions"));
+        for definition in project.definitions {
+            output.push(format!(
+                "- {}: {}",
+                definition.name.trim(),
+                definition.value.trim()
+            ));
+            for info in definition.additional_info {
+                output.push(format!(" - {}", info.trim()))
+            }
+        }
+        output.push(nl());
+    }
+
+    if !project.config_defaults.is_empty() {
+        output.push(s!("## Config Defaults"));
+        for default in project.config_defaults {
+            output.push(format!("- **{}**", default.name.trim()));
+            output.push(format!(" - Type: {}", default.typ.trim()));
+            if let Some(unit) = default.unit {
+                output.push(format!(" - Unit: {}", unit.trim()));
+            }
+            if let Some(valid_values) = default.valid_values {
+                output.push(format!(
+                    " - Valid Values: _{}_",
+                    valid_values.join(", ").trim()
+                ));
+            }
+            if let Some(default_value) = default.default_value {
+                output.push(format!(
+                    " - Default Value: _{}_{}",
+                    default_value.trim(),
+                    default
+                        .hint
+                        .map(|h| format!(" {}", h.trim()))
+                        .unwrap_or_default()
+                ));
+            } else {
+                output.push(format!(
+                    " - **Required**: This value **_MUST_** be provided as a start parameter.{}",
+                    default
+                        .hint
+                        .map(|h| format!(" {}", h.trim()))
+                        .unwrap_or_default()
+                ));
+            }
+            output.push(nl());
+        }
+    }
+
+    let mut output = output.join("\n");
+    for word in HIGHLIGHTED_WORDS {
+        output = output.replace(word, &format!("**_{}_**", word.to_uppercase()));
+    }
+    Ok(output)
+}
+
 fn main() -> anyhow::Result<()> {
     let Args { command } = Args::parse();
     match command {
         Command::Demo => {
             println!("{}", serde_yaml::to_string(&demo_project())?);
         }
+        Command::Html { requirements } => {
+            let output = to_markdown(requirements)?;
+            println!(
+                "{}",
+                markdown::to_html_with_options(&output, &markdown::Options::gfm())
+                    .map_err(|e| anyhow::anyhow!("{e}"))?
+            );
+        }
         Command::Schema => {
             let schema = schema_for!(Project);
             println!("{}", serde_json::to_string_pretty(&schema).unwrap());
         }
         Command::Markdown { requirements } => {
-            let project: Project = serde_yaml::from_str(&std::fs::read_to_string(requirements)?)?;
-
-            let mut output = vec![
-                format!("# Requirements for {}", project.name.trim()),
-                nl(),
-                s!("[[_TOC_]]"),
-                nl(),
-                WORD_DESCRIPTION.trim().to_string(),
-                nl(),
-                format!("**VERSION: {}**", project.version),
-                nl(),
-                s!("## Description"),
-                project.description.trim().to_string(),
-                nl(),
-            ];
-
-            if !project.topics.is_empty() {
-                output.push(s!("## Requirements"));
-                add_topics(&mut output, &project.topics, 3);
-            }
-
-            if !project.definitions.is_empty() {
-                output.push(s!("## Definitions"));
-                for definition in project.definitions {
-                    output.push(format!(
-                        "- {}: {}",
-                        definition.name.trim(),
-                        definition.value.trim()
-                    ));
-                    for info in definition.additional_info {
-                        output.push(format!(" - {}", info.trim()))
-                    }
-                }
-                output.push(nl());
-            }
-
-            if !project.config_defaults.is_empty() {
-                output.push(s!("## Config Defaults"));
-                for default in project.config_defaults {
-                    output.push(format!("- **{}**", default.name.trim()));
-                    output.push(format!(" - Type: {}", default.typ.trim()));
-                    if let Some(unit) = default.unit {
-                        output.push(format!(" - Unit: {}", unit.trim()));
-                    }
-                    if let Some(valid_values) = default.valid_values {
-                        output.push(format!(
-                            " - Valid Values: _{}_",
-                            valid_values.join(", ").trim()
-                        ));
-                    }
-                    if let Some(default_value) = default.default_value {
-                        output.push(format!(
-                            " - Default Value: _{}_{}",
-                            default_value.trim(),
-                            default
-                                .hint
-                                .map(|h| format!(" {}", h.trim()))
-                                .unwrap_or_default()
-                        ));
-                    } else {
-                        output.push(format!(
-                            " - **Required**: This value **_MUST_** be provided as a start parameter.{}",
-                            default.hint.map(|h| format!(" {}", h.trim())).unwrap_or_default()
-                        ));
-                    }
-                    output.push(nl());
-                }
-            }
-
-            let mut output = output.join("\n");
-            for word in HIGHLIGHTED_WORDS {
-                output = output.replace(word, &format!("**_{}_**", word.to_uppercase()));
-            }
-
+            let output = to_markdown(requirements)?;
             println!("{output}");
         }
         Command::Check {
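The `HIGHLIGHTED_WORDS` pass that moves into `to_markdown` in the hunk above is a plain string replacement. A standalone sketch of that pass (the word list here is a made-up example; the actual `HIGHLIGHTED_WORDS` constant may differ):

```rust
// Minimal sketch of the highlight pass: each listed word is replaced
// by its bold-italic uppercase form, e.g. "must" -> "**_MUST_**".
fn highlight(mut output: String, words: &[&str]) -> String {
    for word in words {
        output = output.replace(word, &format!("**_{}_**", word.to_uppercase()));
    }
    output
}

fn main() {
    let out = highlight("The value must be set".to_string(), &["must"]);
    assert_eq!(out, "The value **_MUST_** be set");
    println!("ok");
}
```

Note the replacement is case-sensitive and substring-based, so it relies on the word list matching how the requirement keywords are actually written in the source text.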
@@ -249,7 +288,7 @@ fn main() -> anyhow::Result<()> {
         } => {
             let re = Regex::new(&allowed_requirements).unwrap();
             let test_results = std::fs::read_to_string(test_results)?;
-            let project: Project = serde_yaml::from_str(&std::fs::read_to_string(requirements)?)?;
+            let project: Project = parse(&std::fs::read_to_string(requirements)?)?;
            let mut output = vec![format!("# Test Results - {}", project.name)];
            check_topics(&test_results, &mut output, &project.topics, &re, 2);
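The `parse` helper that the hunks above switch to tries several formats in order via `or_else`, keeping the first that succeeds. The same fallback shape using only the standard library (decimal, then hex, standing in for YAML/JSON/RSN/TOML, purely as an illustrative assumption):

```rust
// Same or_else fallback shape as the new parse(): try the first
// candidate, and only on Err fall through to the next one.
fn parse_flexible(value: &str) -> Result<i64, std::num::ParseIntError> {
    value
        .parse::<i64>() // first candidate: plain decimal
        .or_else(|_| i64::from_str_radix(value.trim_start_matches("0x"), 16))
}

fn main() {
    assert_eq!(parse_flexible("42").unwrap(), 42);
    assert_eq!(parse_flexible("0x2a").unwrap(), 42);
    assert!(parse_flexible("nope").is_err());
    println!("ok");
}
```

One consequence of this design: on failure the caller only sees the last candidate's error, so a malformed YAML file surfaces as a TOML parse error in the real tool.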
@@ -1,36 +0,0 @@
-# Test Results - journal-uploader
-## _TOPIC-1_ - Journal Watcher
-### _SUB-1.1_ - File Monitoring
-- _REQ-1.1.1_ - Continuous Monitoring: :white_check_mark:
-
-### _SUB-1.2_ - File Detection
-- _REQ-1.2.1_ - Detection of New Files: :white_check_mark:
-- _REQ-1.2.2_ - Avoid Re-processing: :x:
-
-
-## _TOPIC-2_ - Traced Logging
-### _SUB-2.1_ - File Processing
-- _REQ-2.1.1_ - Reading Log Messages: :white_check_mark:
-- _REQ-2.1.2_ - Filtering Log Messages: :white_check_mark:
-- _REQ-2.1.3_ - No Duplicate Log Messages: :x:
-
-### _SUB-2.2_ - Traced Log Rotation
-- _REQ-2.2.1_ - Rotating Files: :white_check_mark:
-- _REQ-2.2.2_ - Compression of Rotated Files: :white_check_mark:
-- _REQ-2.2.3_ - Rotating Directory: :x:
-
-
-## _TOPIC-3_ - Remote Journal Logging
-### _SUB-3.1_ - Service Activation
-- _REQ-3.1.1_ - Cloud Activation: :white_check_mark:
-- _REQ-3.1.2_ - Duration: :white_check_mark:
-- _REQ-3.1.3_ - Max Interval: :x:
-- _REQ-3.1.4_ - Analytics Not Accepted: :white_check_mark:
-
-### _SUB-3.2_ - File Processing
-- _REQ-3.2.1_ - File Upload: :white_check_mark:
-- _REQ-3.2.2_ - No Duplicate Files: :x:
-- _REQ-3.2.3_ - Revoking Analytics: :white_check_mark:
-- _REQ-3.2.4_ - Duration Expired: :warning:
-
-