wiki:aca2017:assign3 [2018/03/23 13:58] (current) – [Sample Inputs/Networks] Andreas Moshovos
Here are two python scripts that can read the above files:
{{ :
+ | |||
+ | Here's further info from Milos: | ||
+ | |||
+ | The values are all stored in 16 bit int (if you load it in numpy, you will get an int). If you look at them in binary the first n+1 are the integer part, while the rest are the fraction bits, where n is precision I included in lists. For lenet layer one, input activations will be 2.14 and weights are 1.15. | ||
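For illustration, converting such a 16-bit fixed-point value back to a real number is just a division by two raised to the number of fraction bits. A minimal sketch (the function name fixed_to_float is hypothetical, not part of the provided scripts):

```python
import numpy as np

def fixed_to_float(x, frac_bits):
    # Interpret int16 data as fixed point with frac_bits fractional bits.
    # E.g. LeNet conv1 input activations use 2.14 (2 integer, 14 fraction bits).
    return np.asarray(x, dtype=np.int16).astype(np.float64) / (1 << frac_bits)

fixed_to_float(np.int16(16384), 14)  # 16384 / 2**14 == 1.0
```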
+ | |||
+ | |||
+ | |||
+ | The npy files contain python dictionaries with the values, layer names are the keys. After loading the files into variable var, each layer parameters are accessed as var[‘layer name’], this will be a 4D array. I never used it, but there is a git repo with code to load npy in c/c++ (https:// | ||
+ | ' | ||
+ | |||
+ | |||
+ | The numbers should be interpreted as fixed point values with the following format. The layers are in the same order as prototxt and the diagram and include only convolution and inner product layers. | ||
+ | |||
+ | |||
+ | |||
+ | Lenet layers: | ||
+ | |||
+ | conv1 conv2 ip1 ip2 | ||
+ | |||
+ | Lenet activations: | ||
+ | |||
+ | 2.14 | ||
+ | |||
+ | Lenet weights: | ||
+ | |||
+ | 1.15 | ||
+ | |||
+ | |||
+ | |||
+ | Nin layers: | ||
+ | |||
+ | conv1 cccp1 | ||
+ | |||
+ | Nin activations: | ||
+ | |||
+ | 11.5 | ||
+ | |||
+ | Nin weights: | ||
+ | |||
+ | 1.15 | ||
+ | |||
+ | |||
+ | |||
+ | Activations show the format of the input into the layer. The bias and output of the layer should follow the format of the following layer. Weights follow the format of the current layer. |
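One consequence of these conventions is that a multiply must rescale its result: the raw product of a 2.14 activation and a 1.15 weight carries 14 + 15 = 29 fraction bits and has to be shifted back to the following layer's format. A sketch under that assumption (fx_mul is a hypothetical helper, not from the assignment):

```python
import numpy as np

def fx_mul(a, a_frac, w, w_frac, out_frac):
    # Multiply two fixed-point values and rescale the product so it has
    # out_frac fraction bits; the raw product has a_frac + w_frac of them.
    prod = np.int64(a) * np.int64(w)   # widen first to avoid overflow
    return int(prod >> (a_frac + w_frac - out_frac))

# a = 1.0 in 2.14, w = 0.5 in 1.15, result requested in 2.14 format:
fx_mul(1 << 14, 14, 1 << 14, 15, 14)  # == 8192, i.e. 0.5 in 2.14
```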