STM32N6 NPU Deployment — Politecnico di Milano
1.0
Documentation for Neural Network Deployment on STM32N6 NPU - Politecnico di Milano 2024-2025
Functions
| def | _parse_labels (str label_path) |
| def | _normalize_labels (label, int n, int l) |
| tf.data.Dataset | _get_path_dataset (str path, int seed, bool shuffle=True) |
| def | _get_padded_labels (data, r, R, height, width) |
| tuple[tf.Tensor, tf.Tensor] | _preprocess_function (tf.Tensor data_x, tf.Tensor data_y, tuple[int] image_size, str interpolation, str aspect_ratio, str color_mode, int nbr_keypoints) |
| Tuple[tf.data.Dataset, tf.data.Dataset] | _get_train_val_ds (str training_path, tuple[int] image_size=None, int nbr_keypoints=None, str interpolation=None, str aspect_ratio=None, str color_mode=None, float validation_split=None, int batch_size=None, int seed=None, bool shuffle=True, bool to_cache=False) |
| tf.data.Dataset | _get_ds (str data_path=None, tuple[int] image_size=None, int nbr_keypoints=None, str interpolation=None, str aspect_ratio=None, str color_mode=None, int batch_size=None, int seed=None, bool shuffle=False, bool to_cache=False) |
| Tuple[tf.data.Dataset, tf.data.Dataset, tf.data.Dataset] | load_dataset (str dataset_name=None, str training_path=None, str validation_path=None, str quantization_path=None, str test_path=None, float validation_split=None, int nbr_keypoints=None, tuple[int] image_size=None, str interpolation=None, str aspect_ratio=None, str color_mode=None, int batch_size=None, int seed=None) |
data_loader._get_ds()  [private]
Loads the images from the given dataset root directory and returns a tf.data.Dataset.
The dataset has the following directory structure (checked in parse_config.py):
dataset_root_dir:
image_1.jpg
image_1.txt
...
image_2.jpg
image_2.txt
Args:
data_path (str): Path to the directory containing the images.
image_size (tuple[int]): Size of the input images to resize them to.
nbr_keypoints (int): Number of keypoints per person.
interpolation (str): Interpolation method to use when resizing the images.
aspect_ratio (str): Whether or not to crop the images to the specified aspect ratio.
color_mode (str): Color mode to use for the images.
batch_size (int): Batch size to use for the dataset.
seed (int): Seed to use for shuffling the data.
shuffle (bool): Whether or not to shuffle the data.
to_cache (bool): Whether or not to cache the dataset.
Returns:
tf.data.Dataset: Dataset containing the images.
Definition at line 284 of file data_loader.py.
References _get_path_dataset(), and _preprocess_function().
Referenced by load_dataset().
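The shuffle, batch, and cache flags above follow the usual tf.data ordering: shuffle first (seeded), then batch. As an illustration only (not the actual implementation), that stage can be mimicked in plain Python:

```python
import random

def batch_samples(samples, batch_size, seed=None, shuffle=False):
    """Mimic the shuffle-then-batch stage of a tf.data input pipeline.

    Shuffling happens once, deterministically under `seed`, before the
    items are grouped into fixed-size batches (the last batch may be short).
    """
    items = list(samples)
    if shuffle:
        random.Random(seed).shuffle(items)
    return [items[i:i + batch_size] for i in range(0, len(items), batch_size)]
```

The helper name and the plain-list representation are illustrative; `_get_ds()` operates on a `tf.data.Dataset` instead.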
data_loader._get_padded_labels()  [private]
Definition at line 116 of file data_loader.py.
Referenced by _preprocess_function().
data_loader._get_path_dataset()  [private]
Creates a tf.data.Dataset from a dataset root directory path.
The dataset has the following directory structure (checked in parse_config.py):
dataset_root_dir:
image_1.jpg
image_1.txt
...
image_2.jpg
image_2.txt
Args:
path (str): Path of the dataset folder.
seed (int): seed when performing shuffle.
shuffle (bool): Shuffle the dataset.
Returns:
dataset (tf.data.Dataset): Dataset containing a (path, label) tuple for each sample.
Definition at line 67 of file data_loader.py.
References _normalize_labels().
Referenced by _get_ds(), and _get_train_val_ds().
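Given the directory layout above, pairing each `image_N.jpg` with its `image_N.txt` can be sketched with only the standard library (the real implementation returns a `tf.data.Dataset`; the helper name is illustrative):

```python
import random
from pathlib import Path

def list_sample_pairs(root, seed=None, shuffle=True):
    """Pair each image_N.jpg under `root` with its image_N.txt label file."""
    image_paths = sorted(Path(root).glob("*.jpg"))
    pairs = [(str(p), str(p.with_suffix(".txt"))) for p in image_paths]
    if shuffle:
        # Seeded shuffle keeps the ordering reproducible across runs.
        random.Random(seed).shuffle(pairs)
    return pairs
```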
data_loader._get_train_val_ds()  [private]
Loads the images under a given dataset root directory and returns training
and validation tf.data.Datasets.
The dataset has the following directory structure (checked in parse_config.py):
dataset_root_dir:
image_1.jpg
image_1.txt
...
image_2.jpg
image_2.txt
Args:
training_path (str): Path to the directory containing the training images.
image_size (tuple[int]): Size of the input images to resize them to.
nbr_keypoints (int): Number of keypoints per person.
interpolation (str): Interpolation method to use when resizing the images.
aspect_ratio (str): Whether or not to crop the images to the specified aspect ratio.
color_mode (str): Color mode to use for the images.
validation_split (float): Fraction of the data to use for validation.
batch_size (int): Batch size to use for training and validation.
seed (int): Seed to use for shuffling the data.
shuffle (bool): Whether or not to shuffle the data.
to_cache (bool): Whether or not to cache the datasets.
Returns:
Tuple[tf.data.Dataset, tf.data.Dataset]: Training and validation datasets.
Definition at line 203 of file data_loader.py.
References _get_path_dataset(), and _preprocess_function().
Referenced by load_dataset().
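The seeded `validation_split` logic can be sketched as follows (an illustration under assumed names, not the actual implementation, which splits a `tf.data.Dataset`):

```python
import random

def split_train_val(samples, validation_split, seed=None):
    """Deterministically split samples into (train, validation) lists.

    `validation_split` is the fraction reserved for validation; the
    seeded shuffle ensures the same split is produced on every run,
    so training and validation samples never overlap between runs.
    """
    items = list(samples)
    random.Random(seed).shuffle(items)
    n_val = int(len(items) * validation_split)
    return items[n_val:], items[:n_val]
```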
data_loader._normalize_labels()  [private]
Normalizes the labels so that every label has the same shape regardless of the number of ground truths.
Args:
label (np.array): shape (ground_truths, 5+3*keypoints), ground truths present in the label file
n (int): current number of ground truths present in this label file
l (int): maximum number of ground truths present in a label file
Returns:
normalized_label (np.array) : shape (l, 5+3*keypoints) label with normalized shape
Definition at line 51 of file data_loader.py.
Referenced by _get_path_dataset().
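Under the shapes documented above, normalization amounts to zero-padding each label array from n rows up to l rows. A minimal sketch (the helper name is illustrative; the padding value is assumed to be zero):

```python
import numpy as np

def pad_label(label, n, l):
    """Zero-pad a (n, 5 + 3*keypoints) label array up to l rows,
    so every label in the dataset shares the shape (l, 5 + 3*keypoints)."""
    padded = np.zeros((l, label.shape[1]), dtype=label.dtype)
    padded[:n] = label[:n]
    return padded
```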
data_loader._parse_labels()  [private]
Parses the label files.
Args:
label_path (str): Path of the label file.
Returns:
ground_truths (np.array) : shape (ground_truths, 5+3*keypoints) ground truths present in the label file
Definition at line 32 of file data_loader.py.
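Assuming each `.txt` label file stores one ground truth per line as whitespace-separated floats (an assumption about the file format, consistent with the documented `(ground_truths, 5+3*keypoints)` output shape), parsing can be sketched as:

```python
import numpy as np

def parse_label_file(label_path):
    """Read a label .txt file with one ground truth per line, each line
    holding 5 + 3*keypoints whitespace-separated float values."""
    with open(label_path) as f:
        rows = [[float(v) for v in line.split()] for line in f if line.strip()]
    return np.asarray(rows, dtype=np.float32)
```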
data_loader._preprocess_function()  [private]
Load images from path and apply necessary transformations.
Definition at line 171 of file data_loader.py.
References _get_padded_labels().
Referenced by _get_ds(), and _get_train_val_ds().
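One of the transformations applied here is the resize controlled by `image_size`, `interpolation`, and `aspect_ratio`. As a sketch of one common aspect-ratio policy (scale to fit the target without distortion; whether the remainder is padded or cropped depends on the configured `aspect_ratio` value, which this documentation does not enumerate):

```python
def fit_size(src_hw, dst_hw):
    """Return the (height, width) that scales src_hw to fit inside dst_hw
    while preserving the aspect ratio; the uncovered remainder of the
    target would then be padded (or the image cropped) per the policy."""
    scale = min(dst_hw[0] / src_hw[0], dst_hw[1] / src_hw[1])
    return (round(src_hw[0] * scale), round(src_hw[1] * scale))
```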
Tuple[tf.data.Dataset, tf.data.Dataset, tf.data.Dataset] data_loader.load_dataset (
    str dataset_name = None,
    str training_path = None,
    str validation_path = None,
    str quantization_path = None,
    str test_path = None,
    float validation_split = None,
    int nbr_keypoints = None,
    tuple[int] image_size = None,
    str interpolation = None,
    str aspect_ratio = None,
    str color_mode = None,
    int batch_size = None,
    int seed = None
)
Loads the images from the given dataset root directories and returns training,
validation, and test tf.data.Datasets.
The datasets have the following directory structure (checked in parse_config.py):
dataset_root_dir:
image_1.jpg
image_1.txt
...
image_2.jpg
image_2.txt
Args:
dataset_name (str): Name of the dataset to load.
training_path (str): Path to the directory containing the training images.
validation_path (str): Path to the directory containing the validation images.
quantization_path (str): Path to the directory containing the quantization images.
test_path (str): Path to the directory containing the test images.
validation_split (float): Fraction of the data to use for validation.
nbr_keypoints (int): Number of keypoints per person.
image_size (tuple[int]): resizing (width, height) of input images
interpolation (str): Interpolation method to use when resizing the images.
aspect_ratio (str): Whether or not to crop the images to the specified aspect ratio.
color_mode (str): Color mode to use for the images.
batch_size (int): Batch size to use for the datasets.
seed (int): Seed to use for shuffling the data.
Returns:
Tuple[tf.data.Dataset, tf.data.Dataset, tf.data.Dataset]: Training, validation, and test datasets.
Definition at line 352 of file data_loader.py.
References _get_ds(), and _get_train_val_ds().
Referenced by preprocess.preprocess().
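Since `load_dataset()` calls both `_get_ds()` and `_get_train_val_ds()`, the path arguments determine where each of the three returned datasets comes from. A hedged sketch of that dispatch logic (illustrative names and structure, not the exact implementation):

```python
def plan_datasets(training_path=None, validation_path=None,
                  test_path=None, validation_split=None):
    """Decide which directory feeds each of the three returned datasets."""
    plan = {"train": training_path, "val": None, "test": test_path}
    if validation_path:
        plan["val"] = validation_path   # explicit validation directory
    elif training_path and validation_split:
        plan["val"] = training_path     # carved out of the training data
    return plan
```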