Add a New Executable Module leads_vec_dp (#344)
* Optimized dependency profiles. (#340)

* Added `leads_vec_dp`. (#340)

* Docs. (#340)

* Improved workflow. (#340)

* Updated pyproject.toml. (#340)

* Code reformatted. (#340)

* Code reformatted. (#340)

* Added dependency `pyyaml`. (#340)

* Docs. (#340)

* Supported inferences. (#340)

* Added an output for `InferredDataset.complete()`. (#340)

* Code reformatted. (#340)

* Added `latency_invalid()`. (#340)

* Bug fixed: new latency is negative. (#340)

* Code reformatted. (#340)

* Code reformatted. (#340)

* Bug fixed: avoided `nan`s. (#340)

* Bug fixed: avoided `nan`s. (#340)

* Supported visual data analysis. (#340)

* Added commands. (#340)

* Code reformatted. (#340)

* Docs. (#340)

* Added command `save-as`. (#340)

* Updated LEADS.pptx. (#316) (#340)

---------

Signed-off-by: Terry Fu <futerry@outlook.com>
ATATC authored Aug 4, 2024
1 parent 1ec3660 commit c759554
Showing 17 changed files with 242 additions and 56 deletions.
96 changes: 80 additions & 16 deletions README.md
@@ -189,23 +189,23 @@ command, see [Environment Setup](#environment-setup).
pip install "leads[standard]"
```

If your platform does not support GPIO, use profile "no-gpio".

```shell
pip install "leads[no-gpio]"
```

If you only want the framework, run the following.

```shell
pip install leads
```

#### Verify
This table lists all installation profiles.

```shell
leads-vec info
```
| Profile | Content | For | All Platforms |
|----------------------|-----------------------------------------------------------------|--------------------------------------------------|---------------|
| leads | Only the framework | LEADS Framework | &check; |
| "leads[standard]" | The framework and necessary dependencies | LEADS Framework | &check; |
| "leads[gpio]" | Everything "leads[standard]" has plug `lgpio` | LEADS Framework | &cross; |
| "leads[vec]" | Everything "leads[gpio]" has plus `pynput` | LEADS VeC | &cross; |
| "leads[vec-no-gpio]" | Everything "leads[standard]" has plus `pynput` | LEADS VeC (if you are not using any GPIO device) | &check; |
| "leads[vec-rc]" | Everything "leads[standard]" has plus `"fastapi[standard]` | LEADS VeC Remote Analyst | &check; |
| "leads[vec-dp]" | Everything "leads[standard]" has plus `matplotlib` and `pyyaml` | LEADS VeC Data Processor | &check; |

### Arduino

@@ -225,6 +225,12 @@ the framework in your project.
leads-vec run
```

#### Verify

```shell
leads-vec info
```

#### Replay

```shell
@@ -356,12 +362,6 @@ automatically calculate the best factor to keep the original proportion as desig

### Remote Analyst

The remote analyst requires additional dependencies. Install them through the following command.

```shell
pip install "leads[all]"
```

```shell
leads-vec-rc
```
@@ -394,6 +394,14 @@ If not specified, all configurations will be default values.

To learn about the configuration file, read [Configurations](#configurations).

### Data Processor

```shell
leads-vec-dp path/to/the/workflow.yml
```

To learn more about workflows, read [Workflows](#workflows).

## Environment Setup

This section helps you set up the identical environment we have for the VeC project. A more detailed guide of
@@ -497,6 +505,62 @@ Note that a purely empty file could cause an error.
| `data_dir` | `str` | Directory for the data recording system | Remote | `"data"` |
| `save_data` | `bool` | `True`: save data; `False`: discard data | Remote | `False` |

## Workflows

Workflows apply only to the LEADS VeC Data Processor. A more detailed reference is available
[here](https://leads-docs.projectneura.org/en/latest/vec/index.html#workflows).

```yaml
dataset: "data/main.csv"
inferences:
repeat: 100 # default: 1
enhanced: true # default: false
assume_initial_zeros: true # default: false
methods:
- safe-speed
- speed-by-acceleration
- speed-by-mileage
- speed-by-gps-ground-speed
- speed-by-gps-position
- forward-acceleration-by-speed
- milage-by-speed
- milage-by-gps-position
- visual-data-realignment-by-latency

jobs:
- name: Task 1
uses: bake
- name: Task 2
uses: process
with:
lap_time_assertions: # default: []
- 120 # lap 1 duration (seconds)
- 180 # lap 2 duration (seconds)
vehicle_hit_box: 5 # default: 3
min_lap_time: 60 # default: 30 (seconds)
- name: Draw Lap 5
uses: draw-lap
with:
lap_index: 4 # default: -1
- name: Suggest on Lap 5
uses: suggest-on-lap
with:
lap_index: 4
- name: Draw Comparison of Laps
uses: draw-comparison-of-laps
with:
width: 0.5 # default: 0.3
- name: Extract Video
uses: extract-video
with:
file: rear-view.mp4 # destination to save the video
tag: rear # front, left, right, or rear
- name: Save
uses: save-as
with:
file: data/new.csv
```
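
Not every job is required. As a minimal sketch (assuming omitted `inferences` options simply fall back to the defaults noted above, which is an assumption rather than something this diff confirms), a workflow that only bakes the dataset and saves a copy could look like:

```yaml
dataset: "data/main.csv"
inferences:
  methods:
    - speed-by-acceleration
jobs:
  - name: Bake
    uses: bake
  - name: Save
    uses: save-as
    with:
      file: data/new.csv
```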
## Devices Module
### Example
Binary file modified docs/LEADS.pptx
13 changes: 8 additions & 5 deletions leads/data_persistence/analyzer/inference.py
@@ -245,7 +245,7 @@ def complete(self, *rows: dict[str, _Any], backward: bool = False) -> dict[str,
original_target = target.copy()
t_0, t = target["t"], base["t"]
for channel in self._channels:
- if (new_latency := t_0 - t + base[f"{channel}_view_latency"]) > 0:
+ if (new_latency := t - t_0 - base[f"{channel}_view_latency"]) < 0:
continue
target[f"{channel}_view_base64"] = base[f"{channel}_view_base64"]
target[f"{channel}_view_latency"] = new_latency
@@ -272,8 +272,9 @@ def merge(raw: dict[str, _Any], inferred: dict[str, _Any]) -> None:
for key in inferred.keys():
raw[key] = inferred[key]

- def _complete(self, inferences: tuple[Inference, ...], enhanced: bool, backward: bool) -> None:
+ def _complete(self, inferences: tuple[Inference, ...], enhanced: bool, backward: bool) -> int:
num_rows = len(self._raw_data)
num_affected_rows = 0
for i in range(num_rows - 1, -1, -1) if backward else range(num_rows):
for inference in inferences:
p, f = inference.depth()
@@ -287,7 +288,9 @@ def _complete(self, inferences: tuple[Inference, ...], enhanced: bool, backward:
InferredDataset.merge(row, self._inferred_data[j])
d.append(row)
if (r := inference.complete(*d, backward=backward)) is not None:
num_affected_rows += 1
InferredDataset.merge(self._inferred_data[i], r)
return num_affected_rows

@_override
def load(self) -> None:
@@ -311,20 +314,20 @@ def assume_initial_zeros(self) -> None:
injection["mileage"] = 0
InferredDataset.merge(row, injection)

- def complete(self, *inferences: Inference, enhanced: bool = False, assume_initial_zeros: bool = False) -> None:
+ def complete(self, *inferences: Inference, enhanced: bool = False, assume_initial_zeros: bool = False) -> int:
"""
Infer the missing values in the dataset.
:param inferences: the inferences to apply
:param enhanced: True: use inferred data to infer other data; False: use only raw data to infer other data
:param assume_initial_zeros: True: reasonably set any missing data in the first row to zero; False: no change
:return: the number of affected rows
"""
for inference in inferences:
if not set(rh := inference.header()).issubset(ah := self.read_header()):
raise KeyError(f"Inference {inference} requires header {rh} but the dataset only contains {ah}")
if assume_initial_zeros:
self.assume_initial_zeros()
- self._complete(inferences, enhanced, False)
- self._complete(inferences, enhanced, True)
+ return self._complete(inferences, enhanced, False) + self._complete(inferences, enhanced, True)

@_override
def __iter__(self) -> _Generator[dict[str, _Any], None, None]:
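
Since `complete()` now reports the number of affected rows, callers can log how much an inference pass actually changed. A minimal sketch (the constructor arguments and surrounding workflow are illustrative assumptions; only `load()`, `complete()`, and the new `int` return come from this diff):

```python
from leads.data_persistence.analyzer.inference import InferredDataset

dataset = InferredDataset("data/main.csv")  # hypothetical constructor arguments
dataset.load()
# Pass one or more Inference instances here; the return value counts the
# rows updated across the forward and backward passes combined.
affected = dataset.complete(enhanced=True, assume_initial_zeros=True)
print(f"{affected} rows updated by inference")
```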
23 changes: 18 additions & 5 deletions leads/data_persistence/analyzer/processor.py
@@ -10,7 +10,7 @@

from leads.data import dlat2meters, dlon2meters, format_duration
from leads.data_persistence.analyzer.utils import time_invalid, speed_invalid, mileage_invalid, latitude_invalid, \
- longitude_invalid
+ longitude_invalid, latency_invalid
from leads.data_persistence.core import CSVDataset, DEFAULT_HEADER
from .._computational import sqrt as _sqrt

@@ -38,6 +38,9 @@ def __init__(self, dataset: CSVDataset) -> None:
self._gps_invalid_rows: list[int] = []
self._min_lat: float | None = None
self._min_lon: float | None = None
# visual
self._min_latency: float | None = None
self._max_latency: float | None = None

# process variables
self._laps: list[tuple[int, int, int, float, float]] = []
@@ -69,8 +72,7 @@ def unit(row: dict[str, _Any], i: int) -> None:
t = int(row["t"])
speed = row["speed"]
mileage = row["mileage"]
- if time_invalid(t) or speed_invalid(
- speed) or mileage_invalid(mileage):
+ if time_invalid(t) or speed_invalid(speed) or mileage_invalid(mileage):
self._invalid_rows.append(i)
return
if self._start_time is None:
@@ -94,6 +96,15 @@ def unit(row: dict[str, _Any], i: int) -> None:
self._min_lon = lon
self._gps_valid_count += 1
self._valid_rows_count += 1
# visual
latencies = [row[key] for key in ("front_view_latency", "left_view_latency", "right_view_latency",
"rear_view_latency") if key in row.keys()]
latency = min(latencies)
if not latency_invalid(latency) and (self._min_latency is None or latency < self._min_latency):
self._min_latency = latency
latency = max(latencies)
if not latency_invalid(latency) and (self._max_latency is None or latency > self._max_latency):
self._max_latency = latency

self.foreach(unit, False)
if self._valid_rows_count == 0:
@@ -107,7 +118,7 @@ def _hide_others(seq: _Sequence[_Any], limit: int) -> str:
return f"[{", ".join(map(str, seq[:limit]))}, and {diff} others]" if (diff := len(seq) - limit) > 0 else str(
seq)

- def baking_results(self) -> tuple[str, str, str, str, str, str, str, str, str, str, str, str]:
+ def baking_results(self) -> tuple[str, str, str, str, str, str, str, str, str, str, str, str, str, str]:
"""
Get the results of the baking process.
:return: the results in sentences
@@ -129,7 +140,9 @@ def baking_results(self) -> tuple[str, str, str, str, str, str, str, str, str, s
f"v\u2098\u2090\u2093: {self._max_speed:.2f} KM / H",
f"v\u2090\u1D65\u1D4D: {self._avg_speed:.2f} KM / H",
f"GPS Hit Rate: {100 * self._gps_valid_count / self._valid_rows_count:.2f}%",
f"GPS Skipped Rows: {Processor._hide_others(self._gps_invalid_rows, 5)}"
f"GPS Skipped Rows: {Processor._hide_others(self._gps_invalid_rows, 5)}",
"Min Video Latency: N/A" if self._min_latency is None else f"Min Video Latency: {self._min_latency:.2f} MS",
"Max Video Latency: N/A" if self._max_latency is None else f"Max Video Latency: {self._max_latency:.2f} MS"
)

def erase_unit_cache(self) -> None:
14 changes: 9 additions & 5 deletions leads/data_persistence/analyzer/utils.py
@@ -9,23 +9,27 @@ def time_invalid(o: _Any) -> bool:


def speed_invalid(o: _Any) -> bool:
- return not isinstance(o, int | float) or o != o or o < 0
+ return not isinstance(o, int | float) or o < 0


def acceleration_invalid(o: _Any) -> bool:
- return not isinstance(o, int | float) or o != o
+ return not isinstance(o, int | float)


def mileage_invalid(o: _Any) -> bool:
- return not isinstance(o, int | float) or o != o
+ return not isinstance(o, int | float)


def latitude_invalid(o: _Any) -> bool:
- return not isinstance(o, int | float) or o != o or not -90 < o < 90
+ return not isinstance(o, int | float) or not -90 < o < 90


def longitude_invalid(o: _Any) -> bool:
- return not isinstance(o, int | float) or o != o or not -180 < o < 180
+ return not isinstance(o, int | float) or not -180 < o < 180


def latency_invalid(o: _Any) -> bool:
return not isinstance(o, int | float)


def distance_between(lat_0: float, lon_0: float, lat: float, lon: float) -> float:
4 changes: 3 additions & 1 deletion leads/data_persistence/core.py
@@ -4,6 +4,8 @@
override as _override, Self as _Self, Iterator as _Iterator, Callable as _Callable, Iterable as _Iterable, \
Generator as _Generator, Any as _Any

from numpy import nan as _nan

from leads.types import Compressor as _Compressor, VisualHeader as _VisualHeader, VisualHeaderFull as _VisualHeaderFull
from ._computational import mean as _mean, array as _array, norm as _norm, read_csv as _read_csv, \
DataFrame as _DataFrame, TextFileReader as _TextFileReader
@@ -218,7 +220,7 @@ def __iter__(self) -> _Generator[dict[str, _Any], None, None]:
except StopIteration:
break
for i in range(len(chunk)):
- r = chunk.iloc[i].to_dict()
+ r = chunk.iloc[i].replace(_nan, None).to_dict()
if self._contains_index:
r.pop("index")
yield r
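
Normalizing `nan` to `None` during iteration appears to be what lets the validators in `utils.py` above drop their `o != o` checks: missing cells now fail the `isinstance(o, int | float)` test directly. A standalone sketch of the pattern (plain numpy/pandas, not code from this repository):

```python
from numpy import nan
from pandas import DataFrame

df = DataFrame({"speed": [42.0, nan]})
# The mapping form of replace() turns NaN cells into None, so consumers
# see a proper missing value instead of float("nan").
row = df.iloc[1].replace({nan: None}).to_dict()
print(row)  # expected: {'speed': None}
```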
4 changes: 2 additions & 2 deletions leads/dt/registry.py
@@ -61,7 +61,7 @@ def register_controller(tag: str, c: Controller, parent: str | None = None) -> N


def has_controller(tag: str) -> bool:
- return tag in _controllers.keys()
+ return tag in _controllers


def get_controller(tag: str) -> Controller:
@@ -79,7 +79,7 @@ def _register_device(prototype: type[Device],


def has_device(tag: str) -> bool:
- return tag in _devices.keys()
+ return tag in _devices


def get_device(tag: str) -> Device:
4 changes: 2 additions & 2 deletions leads_vec/cli.py
@@ -214,9 +214,9 @@ def render(manager: ContextManager) -> None:
if cfg.comm_stream:
manager["comm_stream_status"] = _Label(root, text="STM OFFLINE", text_color="gray",
font=("Arial", cfg.font_size_small))
- i = 0
+ j = 0
for system in SystemLiteral:
- i += 1
+ j += 1
system_lower = system.lower()
manager[f"{system_lower}_status"] = _Label(root, text=f"{system} READY", text_color="green",
font=("Arial", cfg.font_size_small))
2 changes: 2 additions & 0 deletions leads_vec/run.py
@@ -17,6 +17,7 @@ def run(config: str | None, devices: str, main: str, register: _Literal["systemd
_create_service()
_L.debug("Service registered")
_L.debug(f"Service script is located at \"{_abspath(__file__)[:-6]}_bootloader/leads-vec.service.sh\"")
return 0
case "config":
if _exists("config.json"):
r = input("\"config.json\" already exists. Overwrite? (Y/n) >>>").lower()
@@ -26,6 +27,7 @@ def run(config: str | None, devices: str, main: str, register: _Literal["systemd
with open("config.json", "w") as f:
f.write(str(Config({})))
_L.debug("Configuration file saved to \"config.json\"")
return 0
case "reverse_proxy":
from ._bootloader import start_frpc as _start_frpc

13 changes: 13 additions & 0 deletions leads_vec_dp/__entry__.py
@@ -0,0 +1,13 @@
from argparse import ArgumentParser as _ArgumentParser
from sys import exit as _exit

from leads_vec_dp.run import run


def __entry__() -> None:
parser = _ArgumentParser(prog="LEADS VeC DP",
description="Lightweight Embedded Assisted Driving System VeC Data Processor",
epilog="GitHub: https://github.com/ProjectNeura/LEADS")
parser.add_argument("workflow", help="specify a workflow file")
args = parser.parse_args()
_exit(run(args.workflow))
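
This entry point presumably backs the `leads-vec-dp` console script (wired up in `pyproject.toml`, which this commit also updates), so the invocation matches the README:

```shell
leads-vec-dp path/to/the/workflow.yml
```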
7 changes: 7 additions & 0 deletions leads_vec_dp/__init__.py
@@ -0,0 +1,7 @@
from importlib.util import find_spec as _find_spec

if not _find_spec("yaml"):
raise ImportError("Please install `pyyaml` to run this module\n>>>pip install pyyaml")

from leads_vec_dp.__entry__ import __entry__
from leads_vec_dp.run import *
4 changes: 4 additions & 0 deletions leads_vec_dp/__main__.py
@@ -0,0 +1,4 @@
from leads_vec_dp.__entry__ import __entry__

if __name__ == "__main__":
__entry__()
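
Shipping a `__main__` module also makes the package runnable with `python -m`, which reaches the same entry point as the console script:

```shell
python -m leads_vec_dp path/to/the/workflow.yml
```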