- Jul 12, 2021
  - Peter Pao-Huang authored
- Jul 08, 2021
  - Yifan Zhao authored
  - Yifan Zhao authored
  - Peter Pao-Huang authored
- Oct 27, 2020
  - Guy Jacob authored
- May 11, 2020
  - Guy Jacob authored
- Apr 30, 2020
- Apr 28, 2020
- Apr 27, 2020
  - Neta Zmora authored: See issue #444
  - Neta Zmora authored
  - Neta Zmora authored
  - Soumendu Kumar Ghosh authored: Merge pytorch 1.3 commits. This PR is a fix for issue #422.
    1. ImageNet models usually use an input size of [batch, 3, 224, 224], but all Inception models require an input size of [batch, 3, 299, 299].
    2. Inception models have auxiliary branches that contribute to the loss only during training. The reported classification loss considers only the main classification loss.
    3. Inception_V3 normalizes the input inside the network itself.
    More details can be found in @soumendukrg's PR #425 [comments](https://github.com/NervanaSystems/distiller/pull/425#issuecomment-557941736).
    NOTE: Training with Inception_V3 is currently only possible on a single GPU. The problem persists in torch 1.3.0: [inception_v3 of vision 0.3.0 does not fit in DataParallel of torch 1.1.0 #1048](https://github.com/pytorch/vision/issues/1048). A sketch of the auxiliary-loss handling follows below.
    Co-authored-by: Neta Zmora <neta.zmora@intel.com>
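    The auxiliary-branch behavior described above is standard torchvision behavior; here is a minimal training-loop sketch. The 0.4 auxiliary-loss weight is a commonly used value and an assumption here, not something specified in this commit:

    ```python
    import torch
    import torch.nn.functional as F
    from torchvision import models

    # Inception_V3 expects 299x299 inputs and, in training mode with aux_logits=True,
    # returns a namedtuple with .logits and .aux_logits fields.
    model = models.inception_v3(pretrained=True, aux_logits=True)
    model.train()

    images = torch.randn(8, 3, 299, 299)      # note: 299x299, not the usual 224x224
    targets = torch.randint(0, 1000, (8,))

    outputs = model(images)
    main_loss = F.cross_entropy(outputs.logits, targets)
    aux_loss = F.cross_entropy(outputs.aux_logits, targets)

    # The auxiliary branch contributes to the loss only during training;
    # only main_loss is reported as the classification loss.
    loss = main_loss + 0.4 * aux_loss
    loss.backward()
    ```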
- Apr 22, 2020
  - Neta Zmora authored: Shorten the TOC
  - Neta Zmora authored: Shorten the basic version
- Apr 21, 2020
  - Neta Zmora authored: Add hsi-toolbox
  - Neta Zmora authored: Added TorchFI citation
- Apr 20, 2020
  - Neta Zmora authored: Added examples/scheduling_api/direct_api_pruning.py, a script that shows how to specify a compression schedule directly through Distiller's API instead of a YAML specification. A sketch of this style of scheduling follows below.
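    A minimal sketch of API-driven scheduling, assuming Distiller's CompressionScheduler, PruningPolicy and AutomatedGradualPruner interfaces; the layer names, sparsity targets and epoch numbers are illustrative and not taken from the actual script:

    ```python
    from collections import OrderedDict

    import torch.nn as nn
    import distiller
    from distiller.pruning import AutomatedGradualPruner

    # A toy model whose parameter names ("fc1.weight", "fc2.weight") we prune
    model = nn.Sequential(OrderedDict([
        ("fc1", nn.Linear(784, 256)),
        ("fc2", nn.Linear(256, 10)),
    ]))

    scheduler = distiller.CompressionScheduler(model)

    # AGP pruner that ramps the named weights from 5% to 85% sparsity
    pruner = AutomatedGradualPruner(name="agp_fc",
                                    initial_sparsity=0.05,
                                    final_sparsity=0.85,
                                    weights=["fc1.weight", "fc2.weight"])

    policy = distiller.PruningPolicy(pruner, pruner_args=None)
    scheduler.add_policy(policy, starting_epoch=0, ending_epoch=30, frequency=2)

    # In the training loop, the scheduler callbacks apply and maintain the masks:
    #   scheduler.on_epoch_begin(epoch)
    #   scheduler.on_minibatch_begin(epoch, batch_id, batches_per_epoch)
    #   ... forward / backward / optimizer.step() ...
    #   scheduler.on_minibatch_end(epoch, batch_id, batches_per_epoch)
    #   scheduler.on_epoch_end(epoch)
    ```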
  - Neta Zmora authored: Added masking primitives:
    - mask_tensor
    - create_mask_threshold_criterion
    - create_mask_level_criterion
    - create_mask_sensitivity_criterion
    These APIs have clearer names and communicate their responsibility better: create a tensor mask based on some criterion. Previously, distiller.pruning.create_mask_threshold_criterion was named distiller.threshold_mask, which did not communicate well what the function did. Masking functionality is no longer hidden inside the Pruner instances, so it can be used directly by an application or to compose new Pruner classes. Removed the file distiller.pruning.pruner: the base class _ParameterPruner was useless and added needless detail to the implementation. AGP: separated the pruning-rate schedule from the rest of the logic, which allows mixing and matching different pruning-rate schedules (just like LR schedulers). A short usage sketch follows below.
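    A short usage sketch of the new masking primitives named above; the exact argument names (threshold, desired_sparsity) are assumptions, not taken from the commit message:

    ```python
    import torch
    from distiller.pruning import (create_mask_threshold_criterion,
                                   create_mask_level_criterion)

    weights = torch.randn(64, 128)

    # Binary mask keeping elements whose magnitude exceeds a threshold
    # (assumed signature: tensor, threshold)
    mask = create_mask_threshold_criterion(weights, threshold=0.1)

    # Binary mask that zeroes the smallest-magnitude elements until the
    # tensor reaches a target sparsity (assumed signature: tensor, desired_sparsity)
    mask = create_mask_level_criterion(weights, desired_sparsity=0.5)

    # The mask can be applied directly, with no Pruner instance involved
    weights.data *= mask
    ```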
- Apr 16, 2020
  - Neta Zmora authored: As requested in issue #496
- Apr 13, 2020
  - Neta Zmora authored: Added some more Distiller citations
  - Neta Zmora authored
  - Neta Zmora authored: Fix ToC; move "Built With" to the Acknowledgement section
  - Neta Zmora authored: Experiment with layout reformatting: shorten the Community section by making it foldable.
  - Neta Zmora authored: Experiment with layout reformatting: shorten the Community section by making it foldable.
- Apr 12, 2020
  - Neta Zmora authored: Added citations
  - Neta Zmora authored: Change distiller citation as suggested in issue #492
  - Neta Zmora authored: Remove warning regarding Distiller release 0.3 (breaking backward compatibility)
- Mar 31, 2020
  - Guy Jacob authored
- Feb 26, 2020
  - Guy Jacob authored: The gitdb versioning issue is resolved internally in gitpython 3.1.0, so we move to that release and remove the specific gitdb requirements.
- Feb 23, 2020
  - levzlotnik authored
  - levzlotnik authored
  - levzlotnik authored
- Feb 17, 2020
  - Guy Jacob authored
  - Guy Jacob authored:
    * BUGFIX: Fixed a wrong attribute name for the zero-point in the conversion of eltwise add/mult and concat
    * Add PyTorch PTQ conversion for embedding (converted to an FP32 embedding + quant op)
    * Fix the conversion function to work with tuple/list model inputs
  - Guy Jacob authored:
    * Move image-classification-specific setup code to a separate script at examples/classifier_compression/ptq_lapq.py
    * Make the ptq_coordinate_search function completely independent of command-line arguments
    * Change the LAPQ command-line args function to update the pre-existing parser (changed the CLA prefix to 'lapq' for more clarity)
    * Enable LAPQ from compress_classifier.py (triggered with --qe-lapq); a usage sketch follows below
    * Add pointers in the documentation
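    A hypothetical invocation of the --qe-lapq trigger mentioned above; only --qe-lapq comes from this commit, and the remaining flags are shown as assumptions based on the usual compress_classifier.py image-classification arguments:

    ```bash
    python compress_classifier.py -a resnet18 --pretrained --evaluate --qe-lapq /path/to/imagenet
    ```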
- Feb 13, 2020
  - Guy Jacob authored