# PyTorch DNNs for NVDLA
This folder contains two DNNs defined in PyTorch that can be exported to NVDLA via HPVM:
- MiniERA DNN: a 4-layer, AlexNet-like architecture for image classification, and
- YOLO: a modified [Tiny-YOLO-v2](https://pjreddie.com/darknet/yolov2/) for object detection.
Scripts for generating these models are `./miniera.py` and `./yolo.py`, respectively.
(**Note**: all scripts should be run after HPVM is built, and within the Python virtual environment that HPVM is built with.)
## MiniERA DNN
Usage: run `python ./miniera.py`. This generates a `./gen_miniera/` folder which contains two important entries:
- `./gen_miniera/hpvm-mod.nvdla`: this is *the* generated NVDLA model.
- `./gen_miniera/images/`: sample images extracted from the model's dataset, which the model can be run on.
Copy these files to an NVDLA-enabled device where `nvdla_runtime` is installed;
the model can then be run with the standard procedure.
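As an illustration, the minimal Python sketch below feeds every sample image through the model on the device. It assumes `hpvm-mod.nvdla` and `images/` were copied into the current working directory, and it borrows the `nvdla_runtime` flags from the YOLO workflow further down:

```python
# Minimal sketch: run every sample image through the model on the device.
# Assumes hpvm-mod.nvdla and images/ are in the current working directory.
import pathlib
import subprocess

for img in sorted(pathlib.Path("images").iterdir()):
    subprocess.run(
        ["./nvdla_runtime", "--loadable", "hpvm-mod.nvdla", "--image", str(img)],
        check=True,  # stop if the runtime reports an error
    )
```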
## Tiny-YOLO-v2
Usage: run `python ./yolo.py`.
Similarly, this generates a `./gen_yolo/` folder which contains the NVDLA model (`hpvm-mod.nvdla`)
and sample images (`images/`).
However, please **note** that this model does not include the postprocessing step that all object detection models require, because NVDLA does not support the operators used there.
Instead, `./host_code/yolo.cpp` implements this postprocessing with Eigen and runs it on the CPU.
We precompiled it into `./host_code/host_bin` for RISC-V,
which can be copied over and used directly, without additional dependencies.
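For reference, this postprocessing is the usual Tiny-YOLO-v2 box decoding. The Python sketch below outlines the idea; the grid size, anchor handling, class count, and channels-last layout are standard Tiny-YOLO-v2 assumptions on our part, and the shipped C++/Eigen code in `./host_code/yolo.cpp` may differ in details:

```python
# Hedged sketch of Tiny-YOLO-v2 box decoding, the kind of postprocessing
# host_code/yolo.cpp performs in C++/Eigen. Layout and defaults here are
# standard Tiny-YOLO-v2 assumptions, not necessarily what yolo.cpp uses.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode(output, anchors, num_classes=20, conf_thresh=0.5):
    """output: (grid, grid, num_anchors * (5 + num_classes)) float array."""
    grid = output.shape[0]
    output = output.reshape(grid, grid, len(anchors), 5 + num_classes)
    boxes = []
    for cy in range(grid):
        for cx in range(grid):
            for a, (aw, ah) in enumerate(anchors):
                tx, ty, tw, th, to = output[cy, cx, a, :5]
                objectness = sigmoid(to)
                if objectness < conf_thresh:
                    continue
                # Box center and size, relative to the whole image.
                x = (cx + sigmoid(tx)) / grid
                y = (cy + sigmoid(ty)) / grid
                w = aw * np.exp(tw) / grid
                h = ah * np.exp(th) / grid
                boxes.append((x, y, w, h, objectness))
    return boxes  # non-maximum suppression would normally follow
```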
Full workflow:
1. Use `python ./yolo.py` to generate the model into `./gen_yolo/`;
2. Copy `./gen_yolo/hpvm-mod.nvdla`, `./gen_yolo/images/`, and `./host_code/host_bin`
to an NVDLA-enabled device with a RISC-V CPU;
3. Locate the floating-point number (call it `$scale`) in `./gen_yolo/calib.txt`,
which is the int8-to-float32 scale of the model's output (a sketch of this step follows the list);
- (We got 0.44269490251707744, and it should always be close to this value.)
4. Run the model with `./nvdla_runtime --loadable hpvm-mod.nvdla --image <image_name> --rawdump`
(`--rawdump` is important);
5. Make sure an `output.dimg` file in plain-text format has been generated;
6. Run the postprocessing binary with the scale: `./host_bin $scale`.
- This will print out all the boxes located in the image.
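For steps 3 and 5, the sketch below shows one way to extract `$scale` and sanity-check the dequantized output in Python. The file layouts it relies on (a single float literal in `calib.txt`, whitespace-separated values in `output.dimg`) are assumptions on our part; `host_bin` performs the real dequantization internally:

```python
# Hedged sketch: pull the output scale out of calib.txt (step 3, on the host)
# and dequantize output.dimg (step 5, on the device). Both file layouts are
# assumptions; host_bin does this for real during postprocessing.
import re

import numpy as np

# Step 3: grab the first float literal in calib.txt.
# Expected to be close to 0.44269490251707744.
text = open("gen_yolo/calib.txt").read()
scale = float(re.search(r"\d+\.\d+(?:[eE][+-]?\d+)?", text).group())
print("scale =", scale)

# Step 5: output.dimg is plain text; assume whitespace-separated int values.
raw = np.loadtxt("output.dimg")
dequantized = raw.astype(np.float32) * scale
print(dequantized.shape, dequantized.min(), dequantized.max())
```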